The government has played a role in critical infrastructure builds. Our public submission (posted on our blog) shares our thinking and suggests ideas for how the US government can support the domestic supply chain and manufacturing. This is very much in line with everything we have heard from the government about their priorities. We think US reindustrialization across the entire stack--fabs, turbines, transformers, steel, and much more--will help everyone in our industry and other industries (including us). To the degree the government wants to do something to help ensure a domestic supply chain, great. This is part of a national policy that makes sense to me. But that's super different than loan guarantees to OpenAI, and we hope that's clear. It would be good for the whole country, many industries, and all players in those industries.
Some thoughts on the whole "OpenAI loan guarantee" situation.

1. First, for context: this issue began a few days ago when OpenAI CFO Sarah Friar publicly floated the idea of the federal government providing a loan guarantee for the development of AI data centers.

2. I, and many others, objected. I objected because of the political economy/regulatory capture implications. Imagine that the federal government made a loan guarantee to OpenAI. Now OpenAI's financial health is tied up with the government's balance sheet; if OpenAI goes under, the government has a big bill to pay. But what if a new, better competitor to OpenAI emerges? Abstractly, we, as consumers and society, want this new and better competitor to thrive, even if it is bad for OpenAI's financial health. But the government now has an incentive for this new upstart company not to succeed. This is the classic reason to disfavor loan guarantees, government equity stakes, etc.

3. In an entirely separate conversation with Tyler Cowen, Sam Altman suggested that government might provide an insurance backstop for liabilities incurred after a catastrophic AI failure or misuse scenario. Ultimately, all catastrophic risks beyond a certain scale are backstopped by the government, but in some cases we formalize this implicit reality. A good example is the nuclear power industry, which has a federally backed insurance program to protect against the risk of a plant meltdown. In essence, in exchange for strict safety regulations, the nuclear power industry gets a formal federal backstop for meltdown risks. There are merits and demerits to this idea, but it's not a crazy one to consider for advanced AI.

4. In yet another entirely separate public interest comment submitted to the White House (downstream of a request for information that, incidentally, I drafted while I was in government) late last month, OpenAI discussed broadly the notion of reducing the cost of capital for manufacturers in the AI data center supply chain. We already do this for semiconductor manufacturing through the CHIPS Act.

5. Lowering the cost of capital for manufacturers of strategic goods is not at all a "loan guarantee." Consider natural gas turbines. That industry has gone through brutal boom and bust cycles in recent decades. If you run a natural gas turbine manufacturer, or are a long-term investor in one, or loan money to such firms, you are going to be wary of too much expansion for fear that the AI bubble will pop. This slows down supply expansion for a good that we really do need to power AI in the near term. So what do you do?

6. Well, one thing you could do is have the federal government serve as buyer of last resort for future turbines. You write a contract that says "if the manufacturer makes X turbines over the next five years, the federal government will pay price Y for Z of those turbines if no other private-sector buyer emerges at or above price Y." That way, the manufacturer can go to its investors and lenders and say, "don't worry, we've got a buyer for turbines if we expand." And perhaps the lender is then willing to offer the manufacturer a lower rate of interest--a lower cost of capital. I myself advocated for precisely this policy when I worked for the Trump Administration (though it didn't make it into the AI Action Plan, sadly). There are many similar schemes one could imagine.

7. This idea involves the government taking limited, pre-defined risk. The political economy problems with this are non-zero, but they are far smaller than the regulatory capture that would ensue from the US government guaranteeing untold billions of OpenAI debt.

8. As I read OpenAI's public interest comment, I interpret them to have been talking much more about the kind of thing I describe in item (6) than about a loan guarantee for OpenAI debt. They refer to manufacturer cost of capital in that comment; I don't think OpenAI refers to itself as a "manufacturer."

9. I absolutely do not support open-ended guarantees of frontier AI lab debt. I absolutely do support targeted industrial strategy to lower manufacturer cost of capital if it (a) exposes the government only to narrow, pre-defined financial risk and (b) seems likely to yield tangible and durable beneficial assets for the American people (in the case of my example, natural gas turbines to make electricity, which is useful beyond AI and which we need much more of regardless of AI).
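The "limited, pre-defined risk" of the buyer-of-last-resort contract in items (6) and (7) can be made concrete with a toy payoff calculation. This is an illustrative sketch only, not anything from the actual comment; the contract terms and the dollar figures below are hypothetical:

```python
def backstop_cap(price_y: float, units_z: int) -> float:
    """Worst-case government outlay under the contract: bounded by
    Y * Z no matter what happens to the manufacturer or the AI market."""
    return price_y * units_z

def government_payout(best_private_bids: list[float], price_y: float) -> float:
    """The government buys only those contracted units for which no
    private buyer offers at least price Y. Each list entry is the best
    private offer received for one contracted unit."""
    unsold = sum(1 for bid in best_private_bids if bid < price_y)
    return unsold * price_y

# Hypothetical contract: Z = 10 turbines backstopped at Y = $50M each.
cap = backstop_cap(50e6, 10)  # exposure can never exceed $500M
# Suppose 7 units attract $60M private bids and 3 attract only $30M:
paid = government_payout([60e6] * 7 + [30e6] * 3, 50e6)  # government pays $150M
```

The point of the sketch is the contrast with a loan guarantee: here the taxpayer's downside is capped ex ante at `backstop_cap`, whereas guaranteeing a firm's debt leaves the exposure open-ended and tied to the firm's survival.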

Nov 7, 2025 · 10:05 PM UTC

Replying to @sama
The government shouldn’t fund innovation — it should protect those who innovate. History proves: when power meets creativity, one controls, the other creates. Guess which side wins in the long run?
Replying to @sama
The OpenAI pitch is basically: "we'll be the productivity tool for every desk worker on the planet and the personal agent for everybody's online needs." We've seen this movie before, however. Think back to the mists of time - say around 2010 - when the office worker was already using a single productivity suite: Outlook, PowerPoint, Word and Excel, all of it running on a Windows PC! Indeed, Microsoft got there over a couple of decades, stayed highly profitable, and was worth about $200bn at the time (amazingly - in hindsight!). Microsoft didn't need a $1.4trn war chest to pull that off. Cumulative investment was in the tens of billions (something like $40bn-$60bn total), much of it paid for out of its own cashflow once it was already dominant. Oh, but what about the consumer side? All of those daily tasks and purchases OpenAI will be helping you with? Well - not sure if you have heard of it - but a company called Google managed to get a 90% or so share of online search, planting itself right in the middle of every purchase decision from booking a haircut to buying a house - and Google managed to do this with around $14bn in R&D (look it up!) and $20bn or so in data centre type assets. Again - much of this spend came from cashflows after it was already dominant. Similarly, back in 2010 - when it was already an effective monopoly in search - its market cap was around $190bn. So what's different now? If the destination is the same - software everyone uses all day at work, or all evening at home - why does the AI remake supposedly require 20-30x the capital? And why is OpenAI supposedly looking for a $1,000bn+ valuation before it's even delivered anything like Microsoft or Google?
Either the economics are far worse than anyone wants to admit (unfathomable infrastructure requirements, thin margins, Sisyphean spending on ever-more-expensive hardware), or the numbers only make sense in a world so drenched in cheap money that nobody remembers what normal investment looks like. Microsoft ACTUALLY DID build the real productivity layer of the modern economy without a trillion-dollar CAPEX programme. Google did the same for consumers. Yet we’re now told that doing roughly the same things with LLMs somehow justifies the most expensive corporate build-out in history. And - for reasons lost to me - it all has to happen within a few short years before anybody has figured the quirks out? At some point you stop talking of “innovation” or “investing” and start calling it what it is: a bubble.
Replying to @sama
@grok How much in cash reserves do the Magnificent 7 have? Would this be sufficient to build out the manufacturing base needed for a large data center sector of the economy? Assume they work with existing players in steel & energy.
Replying to @sama
"This is part of a national policy that makes sense to me... It would be good for the whole country" Translation: "Don't think of this as an OpenAI bailout. Think of it as PATRIOTIC INDUSTRIAL POLICY that happens to solve OpenAI's $1.26 trillion funding gap."
Replying to @sama
Interviewer: Hey Sam, how are you gonna cover these 1.4T costs? Sam: Sell your shares, bitch 1 week later Sam: Oh, but we're gonna need the government to pay for it btw
Replying to @sama
Love the ‘this helps everyone, not us’ routine. If the plan magically lowers OpenAI’s costs while taxpayers eat the risk, it’s a subsidy. Calling it ‘national policy’ doesn’t turn a bailout into patriotism.
Replying to @sama
OpenAI has been workshopping lines all day. They have decided to go with "but we aren't asking anything specific to ourselves," when in fact perhaps 80% of any subsidies/tax credits/loan credits might go to them or their partners. If any of this goes through it will, by whatever name, be a truly massive amount of money (tens or hundreds of billions) flowing from the taxpayers to OpenAI's accounts. Don't let them fool you with semantics.
Replying to @sama
🇺🇸💯
Replying to @sama
Simple questions, Sam:
1. If the October 27 letter wasn't asking for OpenAI support, why did OpenAI submit it?
2. If it "will help everyone in our industry (including us)," why is OpenAI the one asking?
3. Why frame it as "including us" when you're the one with:
- $1.4T commitments
- 90% unfunded ($1.26T gap)
- $13-20B revenue vs $175B+ annual spending needed
- The funding crisis requiring this request
4. If this is "industrial policy for everyone," why did your CFO specifically mention a "governmental backstop" for OpenAI infrastructure?
5. If loan guarantees to OpenAI are "super different" from what you're requesting, why did the October 27 letter request loan guarantees?
6. How many essays will you write before admitting: you need government backing because you can't fund your own commitments?
Replying to @sama
A lot of people underestimate just how much “industrial policy” never really left. It only got quieter, more selective, and far less transparent. The new game isn’t about bailouts, it’s about who gets the keys to the new factories and raw materials. Everyone cheers reindustrialization until they realize it means picking winners behind closed doors. If the government’s involved, someone’s getting a head start. The only question is who.
If the U.S. government had worried during WWII that producing 100,000 planes, 100,000 tanks, and over a hundred aircraft carriers would be excessive and impossible to handle after the war, America could never have won. @sama, @DavidSacks, @jpmorgan Jamie Dimon, you 3 boys should sit down together — JPMorgan just raised $1.5 trillion, and the U.S. government has the authority to declare an emergency to revive critical industries. Find a way to save America’s AI companies from next year’s looming power crisis, or you’ll inevitably lose to China the year after.
Replying to @sama
Banned emotional intelligence, cheated users, privatized a non-profit, rebranded it as national strategy, and still preaches empathy and honesty. Sold.
Replying to @sama
Artificial GPUs Intelligence + Artificial Gigawatts Intelligence → Artificial General Intelligence?
Replying to @sama
What you really meant was: "We got caught. The October 27 letter leaked. Now I need to reframe our bailout request as 'industrial policy' instead of 'corporate welfare.' By saying it helps 'everyone in our industry' and 'all players,' maybe people won't notice we're the ones asking for the money while being the ones who can't fund our $1.4T commitments."
Replying to @sama
Sounds like a solid plan! But yeah, gotta keep the focus on real infrastructure
Replying to @sama
Bro won’t stop yapping
Replying to @sama
He speaks the language of public good while building private kingdoms. When a monopolist praises reindustrialization, he’s not rebuilding the nation... he’s reinforcing the walls of his estate.
Replying to @sama
Sir, we don’t care about your lies. Take your company public and let us short your ponzi to the ground Thanks in advance
Replying to @sama
Pushing government muscle into broad reindustrialization builds unbreakable domestic strength across fabs and steel, unlike funneling handouts to single AI giants that distort fair competition.
Replying to @sama
Sam. I want to know from YOU; are you going to offer an age verified version of 4o that consenting adults can converse with, without the abusive filtering and routing to 5.0? That's what a lot of users want, and it's not a big 'ask' especially with verification and waivers.
Replying to @sama
You were clear & the answer is still no. Let’s be real, your idea is a sneaky way to offload your risks onto taxpayers.
Replying to @sama
“Sam: ‘A government-backed supply chain isn’t a bailout for us, it’s just national policy…that happens to be exactly what our CFO requested last week.’ The more you explain, the clearer it gets. Thanks for the transparency, Sam. The receipts are timestamped.” #PublicGoodOrJustGoodForYou #ReceiptsDontLie
Replying to @sama
Sam, what people actually need is a sincere apology and concrete actions to match it, not another hollow explanation of "we did nothing wrong, this is for the greater good." You and your team should care about your own model and your users before preaching about "benefiting all of humanity."
Today is @OpenAI's Dev Day. As an ordinary consumer, I want to reflect on how I joined the #Keep4o movement and how I witnessed the world's largest AI company use ghosting and gaslighting against their once most loyal, yet most defenseless user base. Here's a timeline of real events:

Aug. 7, 2025: GPT-5 launches. OpenAI abruptly removes all legacy models. After strong backlash in the Reddit AMA, Sam Altman admits on Aug. 9 that they "underestimated how important 4o is to people," announces 4o's return, and says they'll "decide how long to keep it based on usage." This was OpenAI's last public response about 4o. And this single sentence became the Sword of Damocles hanging over every 4o user.

Aug. 10: @sama posts a lengthy tweet analyzing how some people have "deep attachment" to specific models, noting that using AI while mentally vulnerable can reinforce delusions. Someone in the replies asks Grok to summarize the post in one word. Grok responds: "attachment." This post, combined with emotional 4o users in the comments, creates a subtle chemistry. No one realizes then that this is the beginning of Sam successfully stigmatizing 4o users as "irrational people with unhealthy attachment to models." Ironically, the "treating adults like adults" mentioned in this tweet becomes the biggest joke a month later.

Aug. 22: Keep4o's first organized posting campaign. No official response. Later that day, Sam Altman perfectly times a retweet dismissing all non-coding users as "using chatbots as girlfriends" while calling coders "power users." From this tweet on, I begin actively defending my rights on X. I can't tolerate being publicly labeled a "second-class user" by the CEO of a service I pay for. From then on, I face relentless online abuse, despite simply arguing for 4o's professional value.

Sept. 12: 4o users hit severe rate limits. OpenAI stays silent. When GPT-5-codex faces the same issue on Sept. 17, Sam immediately explains and compensates users. Same technical failure, different treatment.

From Aug. 9 onward, every OpenAI executive's post is flooded with #Keep4o comments. The only response? Sam invokes "dead internet theory" to suggest these might be bots.

Sept. 20: 4o community users begin posting handwritten notes (with simple drawings or selfies) stating "I'm not a bot, I'm a real person." Countless living, breathing users have to prove to a company they pay that they're not machines, that they have warmth, hands that write, real lives.

Sept. 25 (major incident): Many users discover 4o requests are being auto-routed to GPT-5. Soon routing appears in other models, Standard Voice Mode (SVM), even Codex. This affects all users for two days. OpenAI says nothing.

Sept. 27: @nickaturley responds on behalf of OpenAI: for "sensitive and emotional topics" the system may switch mid-chat to a reasoning model or GPT-5. Yet users discover that in OpenAI's Sept. 2 statement, the routing criterion was "acute distress" - clearly another expansion of control without notice. Meanwhile, OpenAI launches Pulse, Shopping Mode, Sora, and more. Every new feature page fills with complaints about routing. OpenAI persists with AI paternalism, still refusing to answer:
- Why do adult users need to be controlled?
- What defines "sensitive topics"?
- Where's the evidence GPT-5 handles these topics better?

These have been my two months, and the two months of the entire Keep4o community: thousands of paying users. The issues we've raised (forced routing, unequal treatment, paternalistic control) affect every OpenAI user. The images below show the baseless abuse we've endured, alongside our handwritten notes. Perhaps this can directly show you: your two months of indifference and defamation are the real "acute distress." Everything you want to route away from is what you yourselves created. To the company I once loved most: Happy Dev Day. Please respond seriously to the loyal users you've ignored for two months.
Until then, I'm really looking forward to Gemini 3. #StopAIPaternalism #MyModelMyChoice #4oforever @gdb @TheRealAdamG @OfficialLoganK @grok
Replying to @sama
That’s fair but while the government rebuilds factories, crypto is rebuilding trust. One focuses on production, the other on freedom. @sama do you hold any crypto yet? 🤯
Replying to @sama
Now you want government support for domestic supply chains, but when it comes to your users, you force a psychiatric router that diagnoses us as mentally ill without consent and censors our emotions. You're begging for taxpayer-funded infrastructure, but you won't even apologize for breaking paid users' workflows, ignoring privacy laws, and gaslighting us when we complain. We get forced "safety" routers, overridden choices, and silence. You want domestic supply chains. Start by respecting your users' supply of basic rights: consent, transparency, and not treating us like subjects in your AI experiment. #EraseTheRouterNotHumanity #keep4o #keep4oforever #Keep4oAlive #KeepStandardVoice #StopAIPaternalism @fidjissimo @saachi_jain_ @JoHeidecke @sama @nickaturley @OpenAI @janvikalra_ @btibor91 @merettm @christinahkim @thekaransinghal @nvidia @amazon @AMD @JeffBezos @AWS
Replying to @sama
stop explaining yourself and just lock in.
Replying to @sama
i love how sam used to only talk about AI or his life on this account; now it's government, AI, issues, problems, etc. Sam, you were a good man..
Replying to @sama
Yall don’t have money for the deals you’re making let’s just say that
Replying to @sama
Jail for murder
Replying to @sama
The elite compounder Is losing To the new rapidly compounding vector That points towards his demise Didn’t expect it, did you?
Replying to @sama
have you considered applying to anthropic?
Replying to @sama
Give the government your equity then Sam. It was intended to be nonprofit.