An advanced version of Gemini 2.5 Deep Think has achieved gold-medal level performance at the ICPC 2025 - one of the world’s most prestigious programming contests. 🏅
This result builds on the model's success in math at the IMO and marks another historic milestone for advanced AI. 🧵
🧮 The ICPC brings together participants from nearly 3,000 universities in over 103 countries, who compete to solve real-world coding problems.
This year, our model competed remotely in a live online environment, under the same five-hour time limit as the university contestants. goo.gle/4nuNAhK
💡 Competitive programming is a vital training ground for AI problem-solving because it demands logical deduction, creative algorithm design, and thorough execution.
Gemini 2.5 Deep Think solved 10 out of 12 problems and would have ranked 2nd overall among the university teams. ↓
🔢 In the contest, our model successfully solved Problem C - a problem no university team solved.
It involved complex optimization: finding the most effective way to distribute flow among a set of reservoirs so that they all fill as quickly as possible. Gemini divided the problem into smaller sub-problems and solved it within the first half hour.
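For a flavour of this class of task, here is a deliberately simplified toy model - an assumption for illustration only, not the actual Problem C statement or Gemini's solution. A common competitive-programming pattern for "finish filling as fast as possible" problems is to binary-search on the finishing time T and run a cheap feasibility check for each candidate T:

```cpp
// Hypothetical toy model (NOT the ICPC problem or Gemini's approach):
// reservoir i holds cap[i] units, can take water at a rate of at most r[i],
// and the total inflow across all reservoirs is capped at R.
#include <bits/stdc++.h>
using namespace std;

// Can every reservoir be full by time T?
// With divisible flow this holds iff each reservoir alone fits (cap[i] <= r[i]*T)
// and the total demand fits the shared budget (sum(cap) <= R*T).
bool feasible(double T, const vector<double>& cap, const vector<double>& r, double R) {
    double total = 0.0;
    for (size_t i = 0; i < cap.size(); ++i) {
        if (cap[i] > r[i] * T + 1e-9) return false;
        total += cap[i];
    }
    return total <= R * T + 1e-9;
}

// Binary search on the answer: feasibility is monotone in T,
// so the minimum finishing time is the boundary between infeasible and feasible.
double minFillTime(const vector<double>& cap, const vector<double>& r, double R) {
    double lo = 0.0, hi = 1e18;
    for (int it = 0; it < 100; ++it) {
        double mid = 0.5 * (lo + hi);
        if (feasible(mid, cap, r, R)) hi = mid; else lo = mid;
    }
    return hi;
}

int main() {
    vector<double> cap = {10, 4, 6};   // reservoir capacities
    vector<double> r   = {2, 3, 5};    // per-reservoir max inflow rates
    double R = 5.0;                    // shared total inflow budget
    cout << minFillTime(cap, r, R) << "\n";  // ~5: reservoir 0 alone needs 10/2 = 5
    return 0;
}
```

For this toy model a closed-form answer exists (the larger of max cap[i]/r[i] and sum(cap)/R), but the binary-search-on-the-answer pattern generalizes to feasibility checks with no closed form. The real contest problem is considerably more involved.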
Our performance brings together a series of breakthroughs across:
🔘 Pre-training and post-training
🔘 Novel reinforcement learning techniques
🔘 Multi-step reasoning
🔘 Parallel thinking
Together, these help Gemini explore multiple ways of solving a complex problem and verify its solutions.
Achieving gold-medal-level performance at the ICPC shows that AI could act as a true problem-solving partner for programmers - and signals that, in the near future, much smarter AI coding assistants could help developers tackle complex engineering challenges. ↓
goo.gle/4nuNAhK