xAI quietly dropped a massive update for Grok-4-Fast today.
Use Grok-4-Heavy + Grok-4-Fast + Grok-Code-Fast-1 for code. Grok-4 is the best model in the world right now.
Reasoning Mode: completion rate jumped from 77.5% to 94.1%.
Non-Reasoning Mode: completion rate jumped from 77.9% to 97.9%.
2M context, unified modes, and SOTA.
API: $0.20/M input tokens, $0.50/M output. SuperGrok/Premium+ for unlimited.
Multimodal (text + images) with built-in web/X search.
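At those rates, per-request cost is easy to estimate. Here's a minimal sketch, using only the pricing quoted above ($0.20/M input, $0.50/M output); the function name and the example token counts are my own illustration, not anything from xAI's docs:

```python
# Back-of-the-envelope cost estimate from the posted Grok-4-Fast API pricing.
INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens (from the post)
OUTPUT_PRICE_PER_M = 0.50  # USD per 1M output tokens (from the post)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single API call at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a long-context call: 500k tokens in, 10k tokens out
print(round(request_cost(500_000, 10_000), 4))  # → 0.105
```

Even a call that fills a quarter of the 2M context stays around ten cents, which is the point of the pricing.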
If those numbers hold up consistently, it signals a major shift in accessible, high-performance LLMs. It's exciting to see development pushed this aggressively, making top-tier capabilities available sooner than many expected.
Nov 8, 2025 · 10:23 PM UTC


