How can OpenAI with $13 billion in revenues make $1.4 trillion of spend commitments? (Source: @BG2Pod)
Sam Altman: “First of all. We’re doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I'll find you a buyer. I just, enough. I think there's a lot of people who would love to buy OpenAI shares. I think people who talk with a lot of breathless concern about our compute stuff or whatever, that would be thrilled to buy shares. So I think we could sell your shares or anybody else's to some of the people who are making the most noise on Twitter about this very quickly. We do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it's going to continue to grow and that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds, that our consumer device business will be a significant and important thing, that AI that can automate science will create huge value. There are not many times that I want to be a public company, but one of the rare times it's appealing is when those people are writing these ridiculous OpenAI is about to go out of business. I would love to tell them they could just short the stock, and I would love to see them get burned on that. But we carefully plan. We understand where the technology, where the capability is going to grow and how the products we can build around that and the revenue we can generate. We might screw it up. This is the bet that we're making and we're taking a risk along with that. A certain risk is if we don't have the compute, we will not be able to generate the revenue or make the models at this kind of scale.”
Satya Nadella: “And let me just say one thing as both a partner and an investor. There is not been a single business plan that I've seen from OpenAI that they've put in and not beaten it. So in some sense, this is the one place where in terms of their growth and just even the business, it's been unbelievable execution, quite frankly. I mean, obviously, OpenAI, everyone talks about all the success and the usage and what have you. But even I'd say all up, the business execution has been just pretty unbelievable.”
Why AI is Underhyped and Isn't a Bubble Yet
Here's why AI isn't a bubble today. I've distilled the data points and ideas from my previous columns and coverage. If you prefer facts and evidence-based reality over vibes and conflated narratives, enjoy!
-Big Tech valuations are reasonable and leverage is low, and we're at the beginning of multiple AI super product cycles in the year ahead.
-We are in the early innings of a technology computing shift to AI, the largest in decades. Think 1994 versus 1999.
-Every credible source reports overwhelming demand for AI computing capacity. Where is the overcapacity glut? Nowhere.
Valuations
-During the dotcom bubble, top technology stocks like Microsoft and Cisco traded at 70 to 100 times forward earnings in December 1999 (and remember these stocks went significantly higher still in Q1 2000) versus today's top stocks (Nvidia, etc.) trading at roughly 30 times forward earnings with materially better underlying growth prospects.
-The positive cycle sentiment remains in early stages. The Netscape IPO occurred in 1995, five years before the cycle peak. Is there any doubt the eventual OpenAI IPO will be a Netscape-type success? AI is only two to three years in. Think 1994 versus 1999.
-The major buyers of AI infrastructure are the most profitable businesses in history, each generating roughly $100 billion in annual net profit. This is nothing like the debt-driven fiber optic network buildout of the dotcom era.
Generational Secular Computing Shift
-The global TAM for IT spending is $5-6 trillion annually. The entire technology stack needs restructuring and rearchitecting for AI because this new technology produces better results for corporations and consumers. You either use it or get disrupted by rivals who do.
-Think electrification. Think the advent of the microprocessor and personal computer in the early 1980s. It's that significant.
-The first growth wave was driven by natural language processing with LLMs and the application of parallel GPU computing. For the first time, computers could understand context from user language input. Previously, if you typed one character wrong in a query, the request would fail. GPUs allowed companies to apply incredible computing power to distill and compress knowledge from vast pools of unstructured data.
-The current second wave of growth is driven by reasoning models, which are significantly more accurate and useful because they spend more time working on requests and search dozens of websites. Reasoning models use 100 to 1,000 times more compute than prior models. The next wave will feature increased use of multi-modal models (audio/video) and agents (workflow automation, multi-step tasks). The following wave will be driven by physical AI, drug discovery, factory simulation, and robotics.
AI Bubble Skeptic Arguments
-Skeptics focus on simplistic narratives without nuance: it's up a lot, so it must be a bubble. OpenAI is losing a lot of money!
-This mirrors narratives from earlier this year, when skeptics claimed the DeepSeek moment meant expensive training runs were no longer needed because DeepSeek was trained for only $6 million (a false claim, especially given that China announced $140 billion in new AI investment the same week). They also claimed DeepSeek's efficiency meant a computing glut was ahead. Both views proved categorically false and misleading as reasoning models (including DeepSeek!) and better AI models sparked an exponential wave of AI computing demand over the last nine months.
-OpenAI's business model divides into two parts. Training new models requires enormous fixed investment, with each model generation costing roughly 10x more than the last. Inference, serving already-built models, is extremely profitable.
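The 10x-per-generation training claim above implies steep geometric cost growth. A minimal sketch, using a hypothetical $100 million base cost (the column gives no starting figure):

```python
def training_cost_musd(base_musd: int, generation: int) -> int:
    """Training cost in $ millions for the Nth model generation,
    assuming each generation costs 10x the prior one (per the column's claim)."""
    return base_musd * 10 ** (generation - 1)

# Hypothetical: a $100M first-generation run grows to $100B by generation four.
costs = [training_cost_musd(100, g) for g in range(1, 5)]
print(costs)  # [100, 1000, 10000, 100000]
```

Under this assumption, each new frontier run dwarfs the cumulative cost of all prior runs, which is why the fixed training investment, not inference, dominates OpenAI's losses.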
-Do you believe leading-edge AI models will be a foundational technology transforming every business worldwide? The answer is yes. Companies view the risk as existential.
-100% of Nvidia's engineers use AI coding assistants like Cursor. These AI assistants make engineers more productive by autocompleting code and automatically fixing bugs. All developers will use AI assistants in the future. Customer service? Same. AI will improve chatbots for customer service while giving live agents all the information they need from prior conversations and the latest product updates to improve service quality. Sales agents? Same. Product research and development? Same. AI will test product ideas and iterations while simulating every permutation.
-This is happening and will continue happening. It’s inevitable. Every company will use AI in every part of their business. I spoke with executives from Nvidia, Dell, and Microsoft this month. It's occurring everywhere and accelerating.
Circularity
-Thus far, the vast majority of spending has come from profitable large technology companies. Now concerns about circular vendor spending are rising.
-Despite what you hear, much of this spending hasn't occurred yet, and each gigawatt depends on specific technical performance milestones on both sides (e.g., the AMD-OpenAI deal).
-Yes, the Oracle-OpenAI deal is real, with hundreds of billions in remaining performance obligations (RPO). Let's assume it's $300 billion over five years, requiring roughly $60 billion annually.
-Alphabet/Google generates about $400 billion in annual sales with $120 billion in profit. Meta generates $200 billion in sales with $73 billion in profit.
-Markets and investors look at the potential future, not the present or past. The question becomes: can OpenAI eventually become a technology giant using its AI model advantage and product execution abilities? ChatGPT's user base has grown from 0 to 800 million in three years. Can it reach 2 or 3 billion in another three years? It seems achievable. Like Google in its early years, ChatGPT currently has a scale advantage where the large user base has started a flywheel, enabling ChatGPT to iterate, improve, get more data, and A/B test better answers for users.
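The circularity math above can be checked on the back of an envelope, using only figures cited in this section (the $300 billion RPO figure is an assumption, as noted):

```python
# Assumed Oracle-OpenAI contract: $300B RPO over five years (column's assumption).
oracle_rpo_bn = 300
contract_years = 5
annual_commitment_bn = oracle_rpo_bn / contract_years
print(annual_commitment_bn)  # 60.0 -> roughly $60B per year

# For scale: combined annual profit of two large AI spenders cited above.
alphabet_profit_bn, meta_profit_bn = 120, 73
print(alphabet_profit_bn + meta_profit_bn)  # 193 -> ~$193B per year
```

The point of the comparison: a ~$60 billion annual commitment is large in absolute terms but sits within the profit scale of a single Big Tech company, which is the bar OpenAI would need to reach.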
Nvidia NVL72 Super Cycle – Pre-iPhone Moment
-Dan Benton noted that technology investing revolves around product cycles. In just over two years, Nvidia has increased its quarterly data center revenue tenfold. As impressive as this is, the company will likely experience its best product cycle ever over the next few quarters, comparable to the iPhone launch.
-Nvidia has recently begun shipping GB200/GB300 NVL72 servers in volume for the first time. Foxconn, a major AI server manufacturer for Nvidia, reports that current quarter AI server revenue will increase 170% year-over-year.
-What is the NVL72? The first AI server with 72 GPUs in one rack versus 8 GPUs in prior servers. It offers 25 to 30 times better performance than the previous generation and will enable more AI capabilities and use cases. Each rack weighs 1.5 tons, contains 5,000 copper cables (two miles' worth), and offers unprecedented computing density.
AI Demand Signs
-EVERY credible industry source reports off-the-charts demand and overwhelming compute shortages everywhere. EVERY SINGLE ONE.
-Microsoft's Scott Guthrie, head of the Cloud + AI group at Microsoft and member of the senior executive team, told me this month he sees an "explosion" of AI infrastructure usage ahead, driven by reasoning/agents and workflow automation.
-TD Cowen: "Based on our checks, 3Q25 would represent the largest inflection in demand we have seen since the inception of the data center industry... a staggering ~7.4GW of U.S. data center capacity was leased by hyperscalers in 3Q25, which would exceed all capacity leased in 2024." Largest inflection in demand in history. Right now. More capacity in one quarter than all of last year.
-TSMC's CEO says AI demand is "stronger than" they thought three months ago. Given the strength three months ago, that's remarkable. He then described today's growth as "insane," strongly hinting at guidance and capex raises in January.
-Four-year-old A100 GPUs remain profitable today on inference tasks.
-Microsoft AI startup employee states: "Many days it feels my whole job is begging for GPUs. It's been that way since 2020 and hasn't become easier."
-Anthropic's ARR grew from $1 billion to $7 billion in 10 months.
-Crusoe CEO: "Every single customer we talk to is compute-constrained right now."
-AI token consumption data points from Microsoft, OpenAI, and Google show exponential growth YTD.
Conclusion
-Hold on to your hats. We're just getting started. The next several quarters are going to be off the charts.