I spent the last 60 days working at Cursor. It's been one of the most thrilling phases of my professional life.
There's a lot of mystique around the company. Over the last two months, some things matched my expectations; many did not.
I wrote an essay for @joincolossus about things that have surprised me about the company and its culture so far.
joincolossus.com/article/ins…
Semantic search improves our agent's accuracy across all frontier models, especially in large codebases where grep alone falls short.
Learn more about our results and how we trained an embedding model for retrieving code.
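For rough intuition on why embedding-based retrieval helps where grep falls short, here is a toy sketch (not Cursor's actual implementation): code chunks and the query are mapped to vectors and ranked by cosine similarity instead of exact keyword match. The embed() below is just a bag-of-tokens stand-in; a trained code embedding model would produce dense vectors that capture meaning beyond shared tokens.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a trained code embedding model: a simple bag-of-tokens
    # vector. A real model would return a dense semantic embedding.
    return Counter(text.lower().replace("_", " ").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Pretend these are chunks pulled from a large codebase.
chunks = [
    "def retry_with_backoff(request, max_attempts): ...",
    "class UserProfileSerializer: ...",
    "def parse_config_file(path): ...",
]

query = "retry failed network request with backoff"
q_vec = embed(query)

# Rank chunks by similarity to the query rather than literal keyword match.
ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec), reverse=True)
print(ranked[0])
```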
Finally, we've drastically improved LSP performance.
Python and TypeScript LSPs are now faster by default. Memory is dynamically configured based on available RAM.
We've also fixed a number of memory leaks and improved memory usage.
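As a rough illustration of what "dynamically configured based on available RAM" can mean (a sketch of the general idea, not Cursor's actual logic), a language-server launcher might size the server's heap limit from whatever memory is free at startup. The function name, fraction, and bounds below are illustrative assumptions.

```python
import psutil  # third-party; used here only to read available system memory

def lsp_memory_limit_mb(fraction: float = 0.25,
                        floor_mb: int = 512,
                        cap_mb: int = 4096) -> int:
    """Pick a heap limit for a language server from currently free RAM.

    Takes a fraction of available memory, clamped to sane bounds so the
    server neither starves on small machines nor balloons on large ones.
    """
    available_mb = psutil.virtual_memory().available // (1024 * 1024)
    return max(floor_mb, min(cap_mb, int(available_mb * fraction)))

# For a Node-based server such as the TypeScript language server, the result
# could feed a flag like --max-old-space-size at launch time.
print(f"--max-old-space-size={lsp_memory_limit_mb()}")
```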
As part of our work on the agent harness in Cursor 2.0, we've significantly improved the quality of GPT-5-Codex responses.
It can now work uninterrupted for much longer, with reduced overthinking and more accurate edits. Enjoy!
Composer is a new model we built at Cursor. We used RL to train a big MoE model to be really good at real-world coding, and also very fast.
cursor.com/blog/composer
Excited about the potential of specialized models to help in critical domains.