Last week in AI:

- OpenAI restructured into a Public Benefit Corporation, alongside a $1.4T compute roadmap.
- Poolside raised $1B at a $12B valuation, and Eric Zelikman secured $1B for a new venture.
- Gemini API prices were slashed: 50% off the Batch API and 90% off context caching.
- Google AI Pro plans (with Gemini 2.5) are rolling out via Jio in India.
- Moonshot AI shipped Kimi Linear, a hybrid linear-attention model with 6x decoding throughput.
- Hugging Face released the 200+ page "Smol Training Playbook" for LLMs.
- vLLM introduced "Sleep Mode" for instant, zero-reload model switching (see the sketch after this list).
- New findings suggest that switching from BF16 to FP16 reduces RL fine-tuning divergence (see the numerical sketch below).
- Epoch AI analysis shows open-weight models now catch up to closed SOTA in ~3.5 months.
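For context on the vLLM item, here is a minimal sketch of how Sleep Mode is typically used, assuming the `enable_sleep_mode` constructor flag and the `sleep()` / `wake_up()` methods on the `LLM` class as found in recent vLLM releases; the model name and sleep level here are illustrative assumptions, not a verified reference.

```python
from vllm import LLM, SamplingParams

# Assumption: sleep mode is opted into at engine construction time.
llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct", enable_sleep_mode=True)

out = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)

# Level-1 sleep offloads weights to CPU memory and frees the KV cache,
# releasing GPU memory so another model can run on the same device.
llm.sleep(level=1)

# ... serve or fine-tune a different model here ...

# wake_up() moves the weights back to the GPU without re-initializing
# the engine or reloading from disk, which is what makes switching fast.
llm.wake_up()
```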
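On the BF16-to-FP16 item, the intuition is that FP16 keeps 10 mantissa bits versus BF16's 7, so rounding differences between the training step and the inference engine's numerics are smaller. A minimal numerical illustration (my own, not taken from the cited findings):

```python
import torch

# Machine epsilon reflects mantissa width: FP16 has 10 mantissa bits,
# BF16 only 7 (BF16 spends its extra bits on exponent range instead).
print(torch.finfo(torch.float16).eps)   # ~9.77e-04  (2**-10)
print(torch.finfo(torch.bfloat16).eps)  # ~7.81e-03  (2**-7)

# A value FP16 represents exactly but BF16 rounds away entirely.
x = torch.tensor(1.0 + 2**-9, dtype=torch.float64)
print(x.to(torch.float16))   # tensor(1.0020, dtype=torch.float16)
print(x.to(torch.bfloat16))  # tensor(1., dtype=torch.bfloat16)
```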