If you try to dodge criticism with an appeal to authority, be prepared for this to happen:
Today's Extropic launch raises some new red flags. I started following this company when they refused to explain the input/output spec of what they're building, leaving us waiting for clarification. Here are 3 red flags from today:

1. From extropic.ai/writing/inside-x…:

"Generative AI is Sampling. All generative AI algorithms are essentially procedures for sampling from probability distributions. Training a generative AI model corresponds to inferring the probability distribution that underlies some training data, and running inference corresponds to generating samples from the learned distribution. Because TSUs sample, they can run generative AI algorithms natively."

This is a highly misleading claim about the algorithms that power the most useful modern AIs, on the same level of gaslighting as calling the human brain a thermodynamic computer. IIUC, as far as anyone knows, the majority of AI computation work doesn't match the kind of input/output that you can feed into Extropic's chip.

The page says:

"The next challenge is to figure out how to combine these primitives in a way that allows for capabilities to be scaled up to something comparable to today’s LLMs. To do this, we will need to build very large TSUs, and invent new algorithms that can consume an arbitrary amount of probabilistic computing resources."

Do you really need to build large TSUs to research whether it's possible for LLM-like applications to benefit from this hardware? I would've thought it'd be worth spending a couple million dollars investigating that question via a combination of theory and modern cloud supercomputing hardware, instead of spending over $30M on building hardware that might be a bridge to nowhere.

The documentation for THRML (their open-source library) says:

"THRML provides GPU‑accelerated tools for block sampling on sparse, heterogeneous graphs, making it a natural place to prototype today and experiment with future Extropic hardware."

So you're saying you don't yet know how your hardware primitives could *in principle* be applied toward useful applications of some kind, and you created this library to help do that kind of research on today's GPUs…

Why not release the Python library (THRML) earlier, do the bottleneck research you say needs to be done, and engage the community to help get you an answer to this key question by now? Why wait all this time, until the launch of this extremely niche, tiny-scale hardware prototype, to come forward explaining this make-or-break bottleneck, and only now publicize your search for potential partners with relevant "probabilistic workloads", when the cost of not doing so was $30M and 18 months?

2. From extropic.ai/writing/tsu-101-…:

"We developed a model of our TSU architecture and used it to estimate how much energy it would take to run the denoising process shown in the above animation. What we found is that DTMs running on TSUs can be about 10,000x more energy efficient than standard image generation algorithms on GPUs."

I'm already seeing people on Twitter hyping the 10,000x claim. But for anyone who's followed the decades-long saga of quantum computing companies claiming to achieve "quantum supremacy" with similar hype figures, you know how much care needs to go into defining that kind of benchmark. In practice, it tends to be extremely hard to point to situations where a classical computing approach *isn't* much faster than the claimed "10,000x faster thermodynamic computing" approach.
The Extropic team knows this, but opted not to elaborate on the conditions under which this benchmark, the one they wanted to see go viral, could be reproduced.

3. The terminology they're using has been switched to "probabilistic computer": "We designed the world’s first scalable probabilistic computer." Until today, they were using "thermodynamic computer" as their term, and claimed in writing that "the brain is a thermodynamic computer."

One could give them the benefit of the doubt for pivoting their terminology. It's just that they were always talking nonsense about the brain being a "thermodynamic computer" (in my view the brain is neither that nor a "quantum computer"; it's very much a neural net algorithm running on a classical computer architecture). And this sudden terminology pivot is consistent with them having been talking nonsense on that front.

Now for the positives:

* Some hardware actually got built!
* They explain how its input/output potentially has an application in denoising, though as mentioned, they are vague on the details of the supposed "10,000x thermodynamic supremacy" they achieved on this front.

Overall: This is about what I expected when I first started asking for the input/output 18 months ago. They had a legitimately cool idea for a piece of hardware, no plan for making it useful, and only the vague beginnings of theoretical research that had a chance to make it useful. They seem to have made respectable progress getting the hardware into production (the amount that $30M buys you), and seemingly less progress finding reasons why this particular hardware, even after 10 generations of successor refinements, is going to be of use to anyone.

Going forward, instead of responding to questions about your device's input/output by "mogging" people and saying it's a company secret, and tweeting hyperstitions about your thermodynamic god, I'd recommend being more open about the seemingly giant life-or-death question that the tech community might actually be interested in helping you answer: whether someone can write a Python program in your simulator that provides stronger evidence that some kind of useful "thermodynamic supremacy" with your hardware concept can ever be a thing.
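To make the simulation point concrete, here is a minimal sketch of the kind of experiment I mean: block Gibbs sampling from a sparse quadratic (Ising-style) energy model, runnable on a GPU today. This is plain JAX, not THRML's actual API; every name in it is made up for illustration, and it only shows the input/output shape of the primitive in question.

```python
# Minimal sketch (not THRML's actual API): checkerboard block Gibbs sampling
# on a 2D Ising model, i.e. a quadratic energy over +/-1 spins, in plain JAX.
import jax
import jax.numpy as jnp

def neighbor_sum(spins):
    # Sum of the four nearest neighbors with periodic boundary conditions.
    return (jnp.roll(spins, 1, 0) + jnp.roll(spins, -1, 0) +
            jnp.roll(spins, 1, 1) + jnp.roll(spins, -1, 1))

def gibbs_update(spins, key, mask, coupling, field, beta):
    # P(s_i = +1 | neighbors) = sigmoid(2 * beta * (J * neighbor_sum + h)).
    # Spins of one checkerboard color are conditionally independent given the
    # other color, so updating a whole color at once is a valid block Gibbs step.
    local = coupling * neighbor_sum(spins) + field
    p_up = jax.nn.sigmoid(2.0 * beta * local)
    proposal = jnp.where(jax.random.uniform(key, spins.shape) < p_up, 1.0, -1.0)
    return jnp.where(mask, proposal, spins)

def sample(key, size=64, steps=500, coupling=0.44, field=0.0, beta=1.0):
    # Draw an approximate sample from p(s) proportional to exp(-beta * E(s)).
    keys = jax.random.split(key, 2 * steps + 1)
    spins = jnp.where(jax.random.uniform(keys[0], (size, size)) < 0.5, 1.0, -1.0)
    ii, jj = jnp.meshgrid(jnp.arange(size), jnp.arange(size), indexing="ij")
    even = (ii + jj) % 2 == 0
    for t in range(steps):
        spins = gibbs_update(spins, keys[2 * t + 1], even, coupling, field, beta)
        spins = gibbs_update(spins, keys[2 * t + 2], ~even, coupling, field, beta)
    return spins

print(sample(jax.random.PRNGKey(0)).mean())  # average magnetization of one sample
```

Whether chaining primitives like this can ever beat a GPU on a workload anyone cares about is exactly the question that can be attacked in simulation, long before fabricating chips.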

Oct 31, 2025 · 12:21 AM UTC

56
44
20
1,035
1. Make fun of me all year for asking what's the input/output 2. Finally release a research paper 3. Oh hey, just knowing the input/output tells us the chip is fundamentally too limited to be useful (unless a very unexpected research breakthrough happens):
11) Another problem is that you can’t just write an arbitrary formula for energy and sample from the corresponding distribution. With Extropic chips your energy can only be a quadratic function, which severely limits the types of models you can run
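For readers unfamiliar with the jargon: a "quadratic energy" over binary variables is the Ising / Boltzmann-machine family (my gloss, not the reply author's notation):

$$E(x) = -\sum_{i<j} J_{ij}\, x_i x_j \;-\; \sum_i b_i x_i, \qquad p(x) \propto e^{-\beta E(x)},$$

so only pairwise interactions are expressible directly; models whose energy needs higher-order or non-polynomial terms have to be encoded indirectly with auxiliary variables, if they can be encoded at all.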
3
2
99
Replying to @liron
Shut up Liron
1
5
But I have one more thought on the subject…
4
Replying to @liron
Liron, I dislike your vibe and constant negativity. So it pains me to say that I 100% agree with you on this one. That is some bullshit from the CTO. If an engineer can’t explain something they invented in simple, straightforward terms, then they are a shitty engineer.
1
1
I have a lot of positive tweets too lol
Climate change is stupidly EASY to solve. This company @MakeSunsets can do it by launching $1-2B/yr worth of sulfur-dioxide-filled balloons into the stratosphere to reflect sunlight & cool the planet. No regulations. No treaties. No lifestyle changes. SO₂ is all you need 🙌

Volcanos already do this. Mount Pinatubo successfully cooled Earth by 0.5°C for a year in 1991 by ejecting SO₂ into the stratosphere. The physics is solid, the cost is trivial, and the coordination problem is nonexistent.

Why haven't we already exploited this one weird trick? It's a no-brainer. But people are squeamish about “playing God” with the atmosphere, even while we’re building superintelligent AI. Environmentalists would rather scold you into turning off your lights than support a solution that actually works.

--

My conversation with Andrew Song, cofounder of Make Sunsets, changed how I think about climate change. I went from viewing it as this intractable coordination problem, to realizing we have a cheap and simple solution ready to go. People just need to stop LARPing as if it’s hard!

If you care about orders of magnitude, this episode will blow your mind. And if you feel guilty about your carbon footprint, you'll learn how you can offset an entire year of typical American energy usage for about 15 cents. Yes, cents.

We cover:
◻️ Andrew's background & motivation
◻️ Why the company is called “Make Sunsets”
◻️ What's Your P(Doom)™ from climate change
◻️ Geoengineering and solar radiation management
◻️ The SO₂ dial we can turn
◻️ Where to get SO₂
◻️ Cost calculation: Just $1-2B/yr
◻️ Counterarguments: Termination shock, moral hazard
◻️ Being an energy hog is totally fine
◻️ “The stupidest problem humanity has ever created”
◻️ Offsetting all the CO₂ from OpenAI's Stargate
◻️ Why playing God is good

I did this interview because I think Make Sunsets is an important company to learn about & donate to. I highly recommend watching or listening!
1
Replying to @liron
"You clearly don't know anything about <x>. That's ok; it's a graduate-level topic, and I wouldn't expect you to." is copypasta history in the making. This is right up there with that navy seal blurb from back in the day.
4
2
141
Replying to @liron
Extropic tried to forcefully marry their hardware to the current hot thing, AI. This backfired because the current algos don't benefit that much from their hardware. However, I do believe there is potential here. New algos will need to be developed, but having this kind of chip could prove to be quite useful, especially for fields like bio.
2
1
11
You’re describing extremely early-stage research. I agree that such research has a (small) chance of somehow being valuable. It’s just that they showered $40M on a relatively unpromising example compared to other projects that could use just $500k.
2
11
Replying to @liron
I also dabble in this space. There are not too many of us, and I don’t want to be doxxed. This is not an insignificant development in generating random numbers. It is a fairly insignificant development for all the downstream calculations that generally require more resources
1
9
That’s consistent with what I wrote
2
7
Replying to @liron
You are literally doing the exact same thing here lmao
2
6
He started it, I ended it
2
38
Replying to @liron
Commendable 4D chess. Doomers feel a lot of pressure (civilization-level) to make sure that, however fast the timeline on this tech is going, no more fuel gets added, and to extinguish any further sparks. Being a naysayer is effective & has some plausible deniability.
1
4
There are plenty of other companies accelerating the timeline where all I can do is shake my fist and lobby the government. In the specific case of Extropic, I can make fun of how people are lining up to praise a longshot research idea that should currently have a low valuation.
10
Replying to @liron
Yeah I would stop engaging at this point. Nothing good is going to come out of it either way
1
3
I just have one more tweet after this
7
Replying to @liron
Watch that "debate" between chubby and Connor Leahy again. Boy, Mr. Thermo got savaged!
1
1
Good times
In a revealing clip from today's debate, Guillaume Verdon (@BasedBeffJezos) gets asked by Connor Leahy (@NPCollapse) whether the United States should make the F-16's blueprints open source.

Guillaume's position appears to be: Access to arbitrarily destructive weapons shouldn't be restricted by a central government, as long as we have a central government that has exclusive access to more destructive weapons.

If I'm understanding Guillaume correctly, he's endorsing a policy of allowing any local militia to purchase a 15-kiloton atom bomb like the one that wiped out Hiroshima because, after all, the U.S. government has since built an arsenal of thermonuclear bombs that are each as destructive as 1,000 Hiroshimas.

…Which means the e/acc position doesn't pass a basic sanity check.

NOTE: Connor knows that any discussion around “who should get to control the superintelligent AGI” is likely moot; we'll probably die at the hands of a rogue uncontrollable AI. But when evaluating non-doomers' arguments, it's useful to first test whether they even understand what policy we need to stay alive in a world where AGI isn't uncontrollable, but is merely a very powerful weapon that humans can aim (and works better as a weapon than a shield).
4
Replying to @liron
As much as I hate to agree with you @liron, on Extropic I think you're spot on. The whole thing reeks of snake oil.
1
1
The most successful schemes are a fat donut of BS around a small core of original legitimacy - Madoff and Enron around markets, FTX around a trading platform, Theranos around lab automation and miniaturization, etc.
Replying to @liron
Any company that disses an honest question by putting down the questioner is crap in my view. Also, if you can’t explain what your product does and why it matters to an average intelligent person, then you’re peddling snake oil.
2
15
Replying to @liron
They had actually won me over for a moment and I was planning to read their paper. But then that guy responded that way to you and yeah - no amount of gatekeeping is going to make what clearly needs community / developer support into a thing. For what it’s worth, I have always thought that there’s a lot of untapped potential in the inherent noise present in semiconductors. Finding a way to harness that noise, shape distributions, and leverage it for probabilistic calculations would be super interesting. So it’s kind of a shame.
2
Replying to @liron
I don't know why, but he does look like the kind of guy who would double down on a scam for years and years of his life.
1
Replying to @liron
Esp on X
1
Replying to @liron
I still don't know what that company is doing
1
Replying to @liron
@galic_ivo and I met @DavidDuvenaud at AI Engineer in SF earlier this year; he knows his shit
1
Replying to @liron
Also, lol^max
19
Replying to @liron
Supporting you against Beff like I'm supporting the Viet Cong against the Khmer Rouge
10
Replying to @liron
Falling for this thermodynamics meme is a true IQ test
3
Replying to @liron
Trevor must be a charming character
3
Replying to @liron
BTW, this thing has Theranos written all over it.
3
Replying to @liron
Hahah the authority appeals back
2
Replying to @liron
Wow David came in clutch, what a G
2
Replying to @liron
That’s actually the most insane response he could’ve offered
2