As part of our work on the agent harness in Cursor 2.0, we've significantly improved the quality of GPT-5-Codex responses. It can now work uninterrupted for much longer, with less overthinking and more accurate edits. Enjoy!
Finally, we've drastically improved LSP performance.
Python and TypeScript LSPs are now faster by default, and their memory limits are configured dynamically based on available RAM.
We've also fixed a number of memory leaks and improved memory usage.
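To illustrate the idea of RAM-based sizing, here's a minimal sketch of how a language server's heap limit could be derived from total system memory. This is not Cursor's actual implementation; the heuristic (a quarter of RAM, clamped between 1 GB and 8 GB) and the `lspHeapLimitMb` helper are hypothetical, shown here with Node's standard `os` module and the real `--max-old-space-size` V8 flag:

```typescript
import * as os from "os";
import { spawn } from "child_process";

// Hypothetical heuristic: give the LSP a quarter of system RAM,
// clamped to a sane range. Cursor's real sizing logic is not public.
function lspHeapLimitMb(): number {
  const totalMb = os.totalmem() / (1024 * 1024);
  return Math.min(8192, Math.max(1024, Math.floor(totalMb / 4)));
}

// Launch tsserver with the computed heap cap via --max-old-space-size (in MB).
const tsserver = spawn(
  "node",
  [
    `--max-old-space-size=${lspHeapLimitMb()}`,
    "node_modules/typescript/lib/tsserver.js",
  ],
  { stdio: ["pipe", "pipe", "inherit"] }
);
```

Sizing the heap at launch rather than using a fixed cap means small machines aren't pushed into swap while large ones can keep bigger projects fully indexed.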