PhD in physics | Surviving humans behind this tech or Surviving AI?

Joined February 2025
The reason it is so important for everyone to keep pretending that AGI is definitely right around the corner is that there is now over $1T of investment riding on this belief (either already expended, or committed). Current (and recent-past) capex cannot be justified by current use cases and technology (currently spending $10-15 to make $1). To ever be in the black, you'd need dramatically better tech/applications, and you'd need them fast -- before current datacenters depreciate, which is a 3-5 year timescale
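A back-of-the-envelope sketch of the break-even arithmetic above. The ~$1T figure, the $10-15-per-$1 cost ratio, and the 3-5 year depreciation window come from the post; the midpoints chosen below are illustrative assumptions, not reported data:

```python
# Back-of-the-envelope break-even sketch for AI capex.
# Inputs are taken from the post above; midpoints are assumptions.

capex = 1.0e12           # ~$1T of investment (expended or committed)
cost_per_revenue = 12.5  # midpoint of "spending $10-15 to make $1"
depreciation_years = 4   # midpoint of the 3-5 year datacenter window

# Today: every $1 of revenue costs ~$12.50 to produce, so the
# business is deeply cash-flow negative at current margins.

# To recoup the capex before the hardware depreciates, profit
# would need to average roughly:
required_annual_profit = capex / depreciation_years
print(f"Required average annual profit: ${required_annual_profit / 1e9:.0f}B")
# → Required average annual profit: $250B
```

The point of the sketch is the gap: a business currently losing ~$11.50 per dollar of revenue would need to swing to roughly $250B/year in profit within the depreciation window.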
Arguments that AI uses too much water are a gift to AI boosters, as they are generally so easy to refute. AI has real harms. Water use doesn’t seem to be one. Better to focus on actual harms/risks than hope water use is an issue IMO.
Data center water use is almost entirely a non-issue. Arguments to the contrary generally rest on the fact that almost no one has any context for overall daily US water consumption (300+ billion gallons per day), so it’s easy to present a big-sounding number out of context.
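The context argument above reduces to a simple ratio. The 300+ billion gallons/day figure is from the post; the data-center figure below is a purely hypothetical illustration value, not a sourced number:

```python
# Context check: how big is a "big-sounding" water number relative
# to overall US daily water use? us_total is from the post above;
# dc_daily is a HYPOTHETICAL illustration value, not a sourced figure.

us_total_gal_per_day = 300e9  # 300+ billion gallons/day, US overall
dc_daily_gal = 500e6          # hypothetical: 500 million gallons/day

share = dc_daily_gal / us_total_gal_per_day
print(f"Share of US daily water use: {share:.2%}")
# → Share of US daily water use: 0.17%
```

A number that sounds enormous in isolation ("500 million gallons!") can still be a fraction of a percent of the national total, which is the rhetorical trick the post is describing.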
Anda retweeted
Weird - not better. Freudian slip.
Look, if we just generate enough videos of anime tramps walking around aimlessly we'll get to Mars! 🙄
Anda retweeted
The time you "get back" is called unemployment.
The average human works approx. 2,000 hours a year. With AIs operating over 30 hours nonstop, I can tell you, we're about to get a bunch of time back.
“Alive in the computational sense” This is the most ridiculous bs they could use for promoting their next nonsense.
🔥 GPT-6 may not just be smarter, it might be alive (in the computational sense). A new research paper called SEAL, Self-Adapting Language Models (arXiv:2506.10943), describes how an AI can continuously learn after deployment, evolving its own internal representations without retraining. Some of the SEAL researchers are now working at OpenAI. 👀 That’s no coincidence. SEAL’s architecture enables models to:
• learn from new data in real time
• self-repair degraded knowledge
• form persistent “memories” across sessions
If GPT-6 integrates this, it won’t just use information, it will absorb it. A model that adapts to the world as it changes. A system that gets better every single day. This could be the birth of continuous self-learning AI, the end of the frozen-weights era. Welcome to the next chapter.
by using the work of millions of other creators without permission
sora is enabling millions of new creators
Anda retweeted
Weird how OpenAI's damage control doesn't actually explain why they tried using an unrelated court case to make a key advocate of a whistleblower & transparency bill (SB53) share all private texts/emails about the bill (some involving former OAI employees) as the bill was debated
There’s quite a lot more to the story than this. As everyone knows, we are actively defending against Elon in a lawsuit where he is trying to damage OpenAI for his own financial benefit. Encode, the organization for which @_NathanCalvin serves as the General Counsel, was one of the first third parties - whose funding has not been fully disclosed - that quickly filed in support of Musk. For a safety policy organization to side with Elon (?), that raises legitimate questions about what is going on. We wanted to know, and still are curious to know, whether Encode is working in collaboration with third parties who have a commercial competitive interest adverse to OpenAI. The stated narrative makes this sound like something it wasn’t.

1/ Subpoenas are to be expected, and it would be surprising if Encode did not get counsel on this from their lawyers. When a third party inserts themselves into active litigation, they are subject to standard legal processes. We issued a subpoena to ensure transparency around their involvement and funding. This is a routine step in litigation, not a separate legal action against Nathan or Encode.

2/ Subpoenas are part of how both sides seek information and gather facts for transparency; they don’t assign fault or carry penalties. Our goal was to understand the full context of why Encode chose to join Elon’s legal challenge.

3/ We’ve also been asking for some time who is funding their efforts connected to both this lawsuit and SB53, since they’ve publicly linked themselves to those initiatives. If they don’t have relevant information, they can simply respond that way.

4/ This is not about opposition to regulation or SB53. We did not oppose SB53; we provided comments for harmonization with other standards. We were also one of the first to sign the EU AIA COP, and still one of a few labs who test with the CAISI and UK AISI. We’ve also been clear with our own staff that they are free to express their takes on regulation, even if they disagree with the company, like during the 1047 debate (see thread below).

5/ We checked with our outside law firm about the deputy visit. The law firm used their standard vendor for service, and it’s quite common for deputies to also work as part-time process servers. We’ve been informed that they called Calvin ahead of time to arrange a time for him to accept service, so it should not have been a surprise.

6/ Our counsel interacted with Nathan’s counsel and by all accounts the exchanges were civil and professional on both sides. Nathan’s counsel denied they had materials in some cases and refused to respond in other cases. Discovery is now closed, and that’s that.

For transparency, below is the excerpt from the subpoena that lists all of the requests for production. People can judge for themselves what this was really focused on. Most of our questions still haven’t been answered.
In the age of LLMs, smart people are getting smarter, while dumb people are getting dumber.
It’s not happening.
🚨 It’s happening. You’ll soon be able to send messages directly through ChatGPT. Not just talk to it, talk through it. ChatGPT is quietly becoming the true everything app: AI assistant, search engine, creative studio, code lab, and now… your messenger. From text to voice to video to DMs, all powered by GPT-5. Welcome to the age of AI-native communication.
Anda retweeted
I didn’t think I could lose more respect for OpenAI, but today I did, with this self-righteous twisting of facts, rightly disparaged and dissected in the comments.
Anda retweeted
If AI was improving exponentially, you would not use it to write the same slop that exists everywhere; you'd use it to create new platforms and new operating systems and new game engines and new physics engines that have fewer bugs and higher performance than all existing ones.
Anda retweeted
Most active users of #ChatGPT are on the free tier. Each “active” free-tier user has multiple different accounts. #OpenAI reduced the daily usage cap for the free tier, and obviously the number of accounts created per free user will go up as a result. Just saying… “Faked”
ChatGPT has hit 800 million weekly active users.
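The inflation argument above is simple arithmetic. The 800M figure is from the quoted tweet; the accounts-per-user average and the free-tier share below are hypothetical values chosen purely for illustration:

```python
# If each "active" free-tier user holds multiple accounts, reported
# weekly actives overstate unique people. Only reported_wau comes
# from the quoted tweet; the other inputs are HYPOTHETICAL.

reported_wau = 800e6          # "800 million weekly active users"
accounts_per_free_user = 2.5  # assumed average, for illustration
free_share = 0.95             # assumed fraction of WAU on free tier

paid_users = reported_wau * (1 - free_share)
unique_free_users = reported_wau * free_share / accounts_per_free_user
print(f"Implied unique users: {(paid_users + unique_free_users) / 1e6:.0f}M")
# → Implied unique users: 344M
```

Under these (assumed) inputs, 800M reported accounts would correspond to well under half as many unique people, which is the deflation the poster is gesturing at.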
Ethically? Legally? I'd like to say that ship has sailed, but for those of us following this Silicon Valley silliness since at least 2020, no truly ethical commercial models were made from day one. The ones sent over to Hollywood sure aren't. They still got copyrighted data in 'em.
Jason Blum says Hollywood needs to 'embrace' AI "it’s here to stay. It’s very important to use it ethically and legally, and for studios and guilds to protect the copyright of the artists. But if we in Hollywood stick our heads in the sand and don’t use it at all, we’re going to cede content creation to other people" "The consumer does not care if what they’re looking at is AI" (via @Variety)
Anda retweeted
MORE OF THIS. Fight this garbage if it shows up in your backyard.
Microsoft has withdrawn its proposal for a data center in a Milwaukee suburb after community pushback. After opposition from area residents and elected officials, the 244-acre Caledonia project will not proceed. jsonline.com/story/money/bus…
Anda retweeted
OpenAI is NOT too big to fail. For surveillance capacity, they underestimate how big open source AI is gonna get, actually ‘open AI’. That could sink them and pop the bubble. People get local AI completely outside of the hands of corporate overlords. They are screwed without #4o
OpenAI is Now Too Big to Fail OpenAI has quietly become one of the most critical pillars of the U.S. tech economy. Its influence now extends far beyond chatbots: it fuels infrastructure demand, GPU manufacturing, cloud expansion, and even energy development. The company’s partnerships with Microsoft, Oracle, Nvidia, and AMD created a web of dependencies that tie together trillions in market capitalization. Every improvement in OpenAI’s models drives hardware orders, cloud scaling, and productivity tools across industries. What Sam Altman built isn’t just an AI company; it’s an ecosystem that stimulates economic growth, accelerates innovation, and strengthens America’s technological dominance. At this point, OpenAI isn’t just a company. It’s economic infrastructure.
Anda retweeted
I think people need to be a lot more concerned with the rise of an authoritarian government at the same time we've reached a technological state with AI where it's becoming impossible to trust anything you see. When they fully control the technology and narrative, it's over.
Anda retweeted
Is this the abundance we were promised?
Anda retweeted
This doesn't happen when you use: A camera (photo or video)✅ Digital art program ✅ Traditional media ✅ It only happens when you use scam products made by absolute SHITHEADS! 😊
Anda retweeted
I had a conversation with someone in the AI industry this weekend who, when I confronted them on the morality of their actions, basically said this. Just zero agency, placing the blame at the foot of capitalism/inevitability. Bleak!
Those who do bad things want to believe their actions don’t matter because the outcome is inevitable.