Still bizarre how Meituan is making LLMs, and it's not a run-of-the-mill recipe: they've legitimately done serious research here, improving on the V3 architecture. Not sure about the benchmarks yet; learning good data methods might take more time. But bizarre and fascinating.
🚀 LongCat-Flash-Thinking: Smarter reasoning, leaner costs!
🏆 Performance: SOTA among open-source models on Logic/Math/Coding/Agent tasks
📊 Efficiency: 64.5% fewer tokens to hit top-tier accuracy on AIME25 with native tool use; agent-friendly
⚙️ Infrastructure: Async RL achieves a 3x speedup over Sync frameworks
🔗 Model: huggingface.co/meituan-longc…
💻 Try Now: longcat.ai
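
For anyone who wants to poke at it, a minimal sketch using the standard Hugging Face transformers loading path. The repo id is an assumption (the link above is truncated), and a model this size will realistically need multi-GPU sharding or a hosted endpoint; check the model card for the exact name and hardware guidance.

```python
# Minimal sketch: loading the released checkpoint with Hugging Face transformers.
# The repo id below is an assumption; confirm it on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meituan-longcat/LongCat-Flash-Thinking"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",      # keep the checkpoint's native dtype
    device_map="auto",       # shard across available GPUs
    trust_remote_code=True,  # custom MoE architectures often ship their own modeling code
)

# Chat-style prompt via the tokenizer's chat template (if the model card defines one).
messages = [{"role": "user", "content": "Prove that the sum of two even numbers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```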