aditya prakash venky retweeted
Cooling technologies for data centres are now so advanced that Microsoft’s latest Fairwater data centre cluster in the US uses a liquid-cooled closed-loop system for cooling GPUs that requires ZERO WATER for operations after construction!
If intelligence is the log of compute… it starts with a lot of compute! And that’s why we’re scaling our GPU fleet faster than anyone else. Just last year, we added over 2 gigawatts of new capacity – roughly the output of 2 nuclear power plants. And today we’re going further, announcing the world's most powerful AI datacenter, located in southeastern Wisconsin.

Fairwater is a seamless cluster of hundreds of thousands of NVIDIA GB200s, connected by enough fiber to circle the Earth 4.5 times. It will deliver 10x the performance of the world’s fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.

For AI training workloads, you need compute at exponential scale. That’s why we designed the datacenter, GPU fleet, and network together as one integrated system. This ensures a single job can run from day 1 at exponential scale across thousands of GPUs.

Fairwater uses a liquid-cooled closed-loop system for cooling GPUs that requires zero water for operations after construction. And we’re matching all of the energy that is consumed with renewable sources.

And of course, it is just one of several similar sites we’re lighting up across our 70+ regions. We have multiple identical Fairwater datacenters under construction in other locations across the US, in addition to our AI infrastructure already deployed in over 100 datacenters around the world, powering model training, test-time compute, RL tuning, and real-time inference at global scale.

Too often during times like this, people go with the current and only later wonder, how did we get here? With Fairwater, we're charting a new path: doing the hard engineering work, bringing compute, network, and storage into one highly scaled cluster, and designing closed-loop energy systems to meet real-world computing needs. And partnering with local communities to ensure it's thoughtfully done in a way that is sustainable, creates new jobs, and expands opportunity.
We are thrilled to see this take hold in Wisconsin, and we are just getting started.
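The opening quip, "intelligence is the log of compute," can be read as a rough scaling-law sketch (this gloss is my own reading, not something the post derives): if capability grows only logarithmically with compute, then each fixed gain in capability demands a multiplicative jump in compute, which is the implicit argument for scaling the fleet so aggressively.

```latex
% Hypothetical scaling sketch (illustrative only; constants a, b are assumed):
% capability I as a function of compute C.
I(C) = a \log_{10} C + b
% A 10x increase in compute then yields only a constant additive gain:
I(10C) - I(C) = a \log_{10}(10C) - a \log_{10} C = a
% i.e. every additional increment of a in capability costs another
% order of magnitude of compute.
```

This is only a heuristic reading; empirically observed scaling laws relate loss to compute via power laws and are more nuanced than a single logarithm.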
@GoogleIndia @GoogleAI Regarding the project just announced at Visakhapatnam, India: A. How much water will it consume daily for its operations? B. Per the post below: "...uses a liquid-cooled closed-loop system for cooling GPUs that requires zero water for operations after construction." What is this alternate system being planned by Mr Nadella? Does it not require water at all? Please clarify.
Replying to @AvmNews7
This is for uneducated fools!!
Neat, that compute power is definitely substantial
Remarkable energy commitment levels
impressive scaling of their fleet
impressive capacity growth
I am sure progress comes at a certain cost
Yes
Does the physical architecture of data centers have to be so dreary?
You are right of course, Microsoft will be made redundant within a generation.
the future is looking very compute-heavy
One hopes tech lords finally shut the fuck up about how the little people (who'll see no upside to this) need to "green this and conserve that" Maybe their new compelling need to be even richer will drown their environmental piety bullshit.
Fairwater is a monumental step toward Abundant Intelligence. This exponential compute scale will redefine the Canada BPO sector, shifting our focus from automating tasks to orchestrating unprecedented AI power for high-value client solutions. #CanadaBPO #AI