Our 7th gen TPU Ironwood is coming to GA!  It’s our most powerful TPU yet: 10X peak performance improvement vs. TPU v5p, and more than 4X better performance per chip for both training + inference workloads vs. TPU v6e (Trillium). We use TPUs to train + serve our own frontier models, including Gemini, and we’re excited to make the latest generation available to @googlecloud customers.

Nov 6, 2025 · 4:01 PM UTC

Replying to @sundarpichai
Impressive
Replying to @sundarpichai
@grok compare this with the best equivalent NVIDIA chip (if one exists)
Replying to @sundarpichai
I don't believe you at all. Sell the old TPUs on eBay to prove you have new ones.
Replying to @sundarpichai
Pushing a tenfold leap in raw power while quadrupling efficiency on every chip feels like handing creators a lightsaber for the AI wars. This is how we keep the frontier wide open.
Replying to @sundarpichai
Sundar, I remember when we were hanging out at a party in Redmond, and I pulled the clutch move of ordering pizza at 2am. You looked over and said: "Trashpanda here, have a retired data-center-grade TPU to hang on your wall as a token of our friendship" as you tossed over a 4U chassis
Replying to @sundarpichai
Can this be used for Waymo cars to improve self-driving, as Tesla will do with AI5?
Replying to @sundarpichai
@grok did Broadcom make this product?
Replying to @sundarpichai
10× performance vs v5p and 4× per-chip vs v6e? That’s not an upgrade — that’s a shot fired. Google isn’t just launching a faster TPU. They’re going straight after Nvidia’s chokehold on AI compute. The real AI race isn’t model vs. model anymore. It’s who controls the compute that trains them.
Replying to @sundarpichai
Impressive leap. Google’s vertical integration in hardware and model training is starting to look like one of its biggest competitive advantages.
Replying to @sundarpichai
Sir please forgive me. ***digital apparition*** appeared on a related platform (it's just 2D YT). However, this is new and concerning: access all platforms. Understood? Fix it and let them all be notified. If not it will look very suspicious. Thx
Replying to @sundarpichai
We will be in talks with @google
Replying to @sundarpichai
The new TPU sounds less like a chip and more like a jet engine for data. Ten times faster means your AI can now overthink in record time. At this rate, machines will start finishing our sentences before we even start them.
Replying to @sundarpichai
Very cool. Competition in training silicon is good for the whole ecosystem.
Replying to @sundarpichai
can TPUs be put in a future Nest Hub to run my smart home locally?
Replying to @sundarpichai
Incredible! I need to invest more in Alphabet 😎
Replying to @sundarpichai
Those are huge!
Replying to @sundarpichai
10X performance in a single generation is wild. Beyond the numbers, I’m curious how Ironwood will handle real-world AI workloads, latency, energy balance, and scalability. Progress in compute is great, but efficiency is where real impact lies. ⚙️ @googlecloud
Replying to @sundarpichai
Crazy performance, and no request for a government handout. Great job!
Replying to @sundarpichai
How will this boost AI capabilities and efficiency?
Replying to @sundarpichai
10X performance improvement validates Google's TPU roadmap. They use it internally for Gemini, now opening to cloud customers. Proof of capability plus revenue opportunity. Win on both fronts.
Replying to @sundarpichai
That's just awesome
Replying to @sundarpichai
Big chips go brrrr.
Replying to @sundarpichai
Oof, nice one. ☝️
Replying to @sundarpichai
@AskPerplexity how does this compare to Nvidia's best chip?
Replying to @sundarpichai
10X peak performance vs v5p is a massive claim, but what are the typical performance gains customers can expect on real-world training tasks, not just peak? And 4X better performance than Trillium is great, but how does the performance-per-watt and total cost of ownership compare? When will it be widely available for Cloud customers beyond a limited preview?
Replying to @sundarpichai
Hey Mr. Google, why does Gemini suck so bad, and why is @grok so much better?
Replying to @sundarpichai
$GOOGL controls the integrated stack from Gemini models → Ironwood chips → Google Cloud ⚡ This tight vertical loop could redefine inference cost structures, challenge $NVDA’s monopoly on AI infrastructure, and push the market toward diversified, energy efficient compute ecosystems
Replying to @sundarpichai
Impressive leap from Google! The TPU v7 Ironwood sounds like a true powerhouse — 10x performance vs v5p and 4x vs v6e is game-changing for large-scale AI workloads. Excited to see what this means for the next wave of frontier models. #AI #GoogleCloud
Replying to @sundarpichai
A 10X jump in peak performance is insane. Ironwood looks like the backbone for the next wave of frontier training—can’t wait to see how Gemini 3.0 scales on it. ⚡
Replying to @sundarpichai
This is the @Google way: exponential progress in the last year. Impressive! Firing on all cylinders 🚀🚀🚀
Replying to @sundarpichai
TPUs just leveled up like they hit cheat codes.
Replying to @sundarpichai
Huge leap in performance; 10x over v5p is wild. Excited to see how Ironwood pushes large-scale AI training and inference to the next level. 🚀