Microsoft’s new Fairwater AI datacenter in Wisconsin is a blueprint for training at frontier scale:

• Hundreds of thousands of NVIDIA GB200s on a single flat fabric
• ~2 GW of power with closed-loop liquid cooling (zero operational water use)
• 125k miles of optical fiber (≈4.5× around Earth) tying racks into one supercomputer
• 315 acres / 1.2M sq ft, engineered so one job can span tens of thousands of GPUs on day 1

What stands out: this is co-designed compute + network + storage, not just more GPUs. Expect larger batch sizes, longer horizons, faster refactors, and distributed training across regions via Azure’s AI WAN. The constraint shifts from “get more chips” to “move tokens and heat efficiently.”

Questions we’re watching:

• How fast can jobs schedule across multi-pod, multi-region fabrics without tail-latency spikes?
• Can closed-loop cooling + renewable matching keep PUE and WUE stable at peak loads?
• What does 10× the current top supercomputer mean for model iteration cadence and unit economics?
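As a rough sanity check on the headline figures above (the Earth circumference and per-plant output used here are my own assumptions, not from the announcement):

```python
# Back-of-the-envelope checks on the quoted Fairwater numbers.
EARTH_CIRCUMFERENCE_MI = 24_901   # assumed equatorial circumference, miles
FIBER_MI = 125_000                # quoted optical fiber length, miles
NUCLEAR_PLANT_GW = 1.0            # assumed typical single-plant output, GW
SITE_POWER_GW = 2.0               # quoted new capacity, GW

fiber_wraps = FIBER_MI / EARTH_CIRCUMFERENCE_MI   # comes out near 5x, in the
                                                  # same ballpark as the quoted 4.5x
plant_equiv = SITE_POWER_GW / NUCLEAR_PLANT_GW    # ~2 plant-equivalents

print(f"fiber wraps Earth ~{fiber_wraps:.1f}x; power ~{plant_equiv:.0f} plant-equivalents")
```

Both quoted figures are at least order-of-magnitude consistent under these assumptions.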
If intelligence is the log of compute… it starts with a lot of compute! And that’s why we’re scaling our GPU fleet faster than anyone else. Just last year, we added over 2 gigawatts of new capacity – roughly the output of 2 nuclear power plants. And today we’re going further, announcing the world's most powerful AI datacenter, located in southeastern Wisconsin.

Fairwater is a seamless cluster of hundreds of thousands of NVIDIA GB200s, connected by enough fiber to circle the Earth 4.5 times. It will deliver 10x the performance of the world’s fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.

For AI training workloads, you need compute at exponential scale. That’s why we designed the datacenter, GPU fleet, and network together as one integrated system. This ensures a single job can run from day 1 at exponential scale across thousands of GPUs. Fairwater uses a liquid-cooled closed-loop system for cooling GPUs that requires zero water for operations after construction. And we’re matching all of the energy that is consumed with renewable sources.

And of course, it is just one of several similar sites we’re lighting up across our 70+ regions. We have multiple identical Fairwater datacenters under construction in other locations across the US, in addition to our AI infrastructure already deployed in over 100 datacenters around the world, powering model training, test-time compute, RL tuning, and real-time inference at global scale.

Too often during times like this, people go with the current and only later wonder, how did we get here? With Fairwater, we're charting a new path: doing the hard engineering work, bringing compute, network, and storage into one highly scaled cluster, and designing closed-loop energy systems to meet real-world computing needs. And partnering with local communities to ensure it's thoughtfully done in a way that is sustainable, creates new jobs, and expands opportunity.
We are thrilled to see this take hold in Wisconsin, and we are just getting started.
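The “intelligence is the log of compute” framing in the post implies that each fixed capability increment costs a multiplicative jump in compute. A minimal sketch of that scaling claim, where the capability function and log base are purely illustrative assumptions:

```python
import math

def intelligence(compute_flops: float, base: float = 10.0) -> float:
    """Hypothetical capability score: log of training compute (the base is an assumption)."""
    return math.log(compute_flops, base)

# Under this model, every 10x in compute buys the same fixed capability gain,
# which is why the fleet must grow multiplicatively rather than additively.
gain_per_10x = intelligence(1e24) - intelligence(1e23)
print(f"capability gain per 10x compute: {gain_per_10x:.2f}")
```

The flat per-decade gain is the whole argument for gigawatt-scale buildouts: linear progress on the y-axis demands exponential spending on the x-axis.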
Big tech doesn't want to build themselves? Lmao. They need all the capacity they can get, whether self-built or outsourced.
👀🔥
Microsoft unveils the Fairwater AI datacentre in Wisconsin, spanning 315 acres with 1.2 million square feet and integrating hundreds of thousands of NVIDIA GB200 GPUs to achieve 10 times the performance of today's fastest supercomputer for AI tasks. x.com/satyanadella/status/19… Under construction on a repurposed Foxconn site with completion eyed for 2026, it features closed-loop liquid cooling; Microsoft plans similar facilities via partnerships in Norway and the UK.
Satya Nadella
Intelligence is not the log of compute.
ICYMI
Thrilled to contribute to the #Fairwater project by enabling high performance backend network and helping build the world’s most powerful AI supercomputer! Grateful for this incredible opportunity. #AI #Supercomputing
Giant new Microsoft AI datacenter in southeast Wisconsin. Fairwater is a seamless cluster of hundreds of thousands of NVIDIA GB200s, connected by enough fiber to circle the Earth 4.5 times.
Send your CAPEX to Nvidia who sends it to TSMC.
If, on the other hand, intelligence is the sine of compute, we're clearly in a downward cycle and you should stop building data centers. And if it were a unicorn, it would prance beautifully.
That's a big "if".
Engineering hubris.
"Fairwater uses a liquid-cooled closed-loop system that requires zero water for operations after construction."
Replying to @mr_abundance_
Here we go Mr A.
What staggering numbers. Generative AI models devouring datacenter after datacenter for breakfast, feeding on hundreds of thousands of GPUs and snacking on the energy of two nuclear power plants.
If intelligence is the log of compute… it starts with a lot of compute! And that’s why we’re scaling our GPU fleet faster than anyone else. Just last year, we added over 2 gigawatts of new capacity – roughly the output of 2 nuclear power plants. And today we’re going further, announcing the world's most powerful AI datacenter, located in southeastern Wisconsin. Fairwater is a seamless cluster of hundreds of thousands of NVIDIA GB200s, connected by enough fiber to circle the Earth 4.5 times. It will deliver 10x the performance of the world’s fastest supercomputer today, enabling AI training and inference workloads at a level never before seen. For AI training workloads, you need compute at exponential scale. That’s why we designed the datacenter, GPU fleet, and network together as one integrated system. This ensures a single job can run from day 1 at exponential scale across thousands of GPUs. Fairwater uses a liquid-cooled closed-loop system for cooling GPUs that requires zero water for operations after construction. And we’re matching all of the energy that is consumed with renewable sources. And of course, it is just one of several similar sites we’re lighting up across our 70+ regions. We have multiple identical Fairwater datacenters under construction in other locations across the US, in addition to our AI infrastructure already deployed in over 100 datacenters around the world, powering model training, test-time compute, RL tuning, and real-time inference at global scale. Too often during times like this, people go with the current and only later wonder, how did we get here? With Fairwater, we're charting a new path: doing the hard engineering work, bringing compute, network, and storage into one highly scaled cluster, and designing closed-loop energy systems to meet real-world computing needs. And partnering with local communities to ensure it's thoughtfully done in a way that is sustainable, creates new jobs, and expands opportunity. 
We are thrilled to see this take hold in Wisconsin, and we are just getting started.
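The post's headline figures can be sanity-checked with simple arithmetic. A minimal sketch, assuming Earth's equatorial circumference (~40,075 km) and a typical large nuclear reactor's electrical output (~1 GW) — both assumptions, neither stated in the post:

```python
# Back-of-envelope check of the announcement's figures.
# Assumed values (not from the post):
EARTH_CIRCUMFERENCE_KM = 40_075   # assumed equatorial circumference of Earth
REACTOR_OUTPUT_GW = 1.0           # assumed output of one large nuclear reactor

# Claims from the post:
fiber_loops = 4.5                 # "enough fiber to circle the Earth 4.5 times"
added_capacity_gw = 2.0           # "over 2 gigawatts of new capacity"

fiber_km = fiber_loops * EARTH_CIRCUMFERENCE_KM
equivalent_reactors = added_capacity_gw / REACTOR_OUTPUT_GW

print(f"Implied fiber length: ~{fiber_km:,.0f} km")
print(f"Equivalent reactors:  ~{equivalent_reactors:.0f}")
```

Under these assumptions, 4.5 loops works out to roughly 180,000 km of fiber, and 2 GW is consistent with the "2 nuclear power plants" comparison.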
Compute $$$
Liquid cooled and matching energy consumed with renewables. I’m curious how they’re doing that. It’s going to be super power hungry.
Impressive!
Willing to bet they're using the raceway pond to dump heat, in an attempt not to decimate the water supply of the small town where this datacenter is located?
@leijun @Xiaomi @WilliamLuXiaomi we cannot run away from AI and chips n data centres