Okay here's the first thing I did with THRML by @extropic
It's just a basic sudoku solver. Thermodynamic computing is a bit overkill for this task, but since humans can actually do sudoku, I think it's a good intuition pump for what's going on under the hood.
With sudoku, there are many overlapping constraints. You start with a partially filled puzzle (the initial conditions), and then the rules supply the rest: no duplicates in any row, column, or 3×3 box.
Now, with a sudoku problem, you know there is ONE solution: the "low energy state," i.e. the configuration with no rule violations or collisions.
So then what you do is program those "clamped" initial values into the TSU, bake in the rules (no duplicates), and then, due to the laws of thermodynamics and electricity... it just sort of settles into the correct solution (this is "annealing").
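(I'm not pasting the generated THRML script here, and nothing below is THRML's actual API — it's my plain-Python stand-in for the energy model. The idea: clue cells are "clamped" and never resampled, free cells get shaken until the violation count hits zero.)

```python
def energy(grid):
    """Violation count for a 9x9 grid: one unit per duplicate pair
    in any row, column, or 3x3 box. A valid solution has energy 0."""
    def dup_pairs(cells):
        counts = {}
        for v in cells:
            counts[v] = counts.get(v, 0) + 1
        return sum(c * (c - 1) // 2 for c in counts.values())

    e = 0
    for i in range(9):
        e += dup_pairs(grid[i])                          # row i
        e += dup_pairs([grid[r][i] for r in range(9)])   # column i
    for br in range(0, 9, 3):                            # 3x3 boxes
        for bc in range(0, 9, 3):
            e += dup_pairs([grid[br + r][bc + c]
                            for r in range(3) for c in range(3)])
    return e

# A known valid solution sits at the global energy minimum (energy 0).
SOLVED = [
    [5, 3, 4, 6, 7, 8, 9, 1, 2],
    [6, 7, 2, 1, 9, 5, 3, 4, 8],
    [1, 9, 8, 3, 4, 2, 5, 6, 7],
    [8, 5, 9, 7, 6, 1, 4, 2, 3],
    [4, 2, 6, 8, 5, 3, 7, 9, 1],
    [7, 1, 3, 9, 2, 4, 8, 5, 6],
    [9, 6, 1, 5, 3, 7, 2, 8, 4],
    [2, 8, 7, 4, 1, 9, 6, 3, 5],
    [3, 4, 5, 2, 8, 6, 1, 7, 9],
]
```

The sampler only ever touches the non-clue cells; annealing lowers the temperature until the state freezes at energy 0.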
The reason I think this is such a good example of what TSUs do is that for humans (and classical computers) it's more or less a "guess and check" process. No matter what method you use, classical or human, it's an iterative refinement process of sequential steps.
But, with sudoku, as you can see in the output below, it's a single step. That's because the TSU looks at the whole problem globally.
Here's how I did this: ChatGPT PRO 🤣
No joke, ChatGPT Pro one-shotted this entire problem. There were several refinements, though mostly around UI and validation (not the core logic). However, we did do an optimization step to make sure we were using the correct block batching from the THRML library.
Nov 4, 2025 · 11:55 AM UTC
Now, here's the second thing I did, which is similar-but-different. It's called the 8 queens problem. The rules are simple: place 8 queens on a chess board such that none of them can attack each other. Simple to say, hard to execute.
There are 92 solutions to this problem, and so I wrote a script to test if THRML/TSU would get stuck on a single solution. Here's the trick: there are billions of possible board states, but the 92 solutions all have the same energy level: 0. So how would a TSU handle that? In reality, there are rarely going to be multiple solutions that are TOTALLY equal in loss/energy.
So I fired up this script, which baked in the rules (with no starting conditions), and watched as the TSU quickly iterated through, discovering solutions. It took over 400 iterations to collect all 92 because of the massive number of duplicates it generated, but it ultimately did produce all 92 unique solutions.
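For the curious, here's roughly the energy function involved (my plain-Python reconstruction, not the THRML code itself), plus a brute-force sanity check that its ground states are exactly the 92 known solutions:

```python
from itertools import permutations

def attacks(q):
    """Energy = number of attacking queen pairs. q[i] is the column of the
    queen in row i; using a permutation of 0..7 guarantees no row or column
    conflicts, so only diagonals can clash."""
    return sum(
        1
        for i in range(8) for j in range(i + 1, 8)
        if abs(q[i] - q[j]) == abs(i - j)
    )

# The ground states (energy 0) of this function are exactly the 92 solutions.
solutions = [p for p in permutations(range(8)) if attacks(p) == 0]
```

The TSU-style sampler wanders the state space and keeps landing on energy-0 states; every one it hits is a valid board.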
There are a few things that are remarkable about this whole process:
FIRST, ChatGPT has never seen the THRML library. Yet it was able to read the docs, look at the examples, and code it up ONE SHOT. The 8 queens puzzle required no refinements or fixes. It just... worked...
SECOND, since 8 queens is a known problem space with a known solution set, it was an ideal benchmark to test THRML against (maybe @beffjezos will send me a TSU!). It will be interesting to see how the real hardware fares, but considering it makes use of natural noise, I figure it will probably perform similarly.
THIRD, there's probably something mathematical and interesting going on around the fact that it took about 400 iterations to discover all 92 solutions. But I am not smart enough to figure that out.
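One guess (and it's only a guess): if each run lands on one of the 92 ground states uniformly at random, this is the classic coupon collector problem, and the expected number of runs to see all 92 is 92·H₉₂ ≈ 470, right in the neighborhood of what I observed:

```python
from fractions import Fraction

# Coupon collector: expected draws to see all n equally likely outcomes
# is n * H_n, where H_n is the n-th harmonic number.
n = 92
expected_draws = float(n * sum(Fraction(1, k) for k in range(1, n + 1)))
print(expected_draws)  # ≈ 469.6
```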
FOURTH, it will be far more interesting, I think, to point TSUs at totally novel problems where we do NOT have a complete solution set.
Next, I wanted to riff on where I think this is going. The first, and I think most obvious, direction is protein folding and protein design. With proteins, you're trying to find the lowest global energy state of a complex molecule. You can brute force this with older distributed projects like Folding@home, or you can train ginormous models like AlphaFold to approximate the shapes with neural networks (albeit very accurately and quickly).
OR
You can use thermodynamic computing to rapidly design/discover the optimal shape for any given protein. Not only that, it can be done on the SAME hardware that solved sudoku and the 8 queens problem. And that, I think, underscores the key difference: the TSU is a general purpose accelerator. It's not unlike how a TPU or GPU accelerates a particular kind of math.
The TSU accelerates other kinds of math, and it's all based on noise. True measurements of entropy, not pseudo-random number generators. AND because of the way the p-bits interact, you can approximate a true Boltzmann distribution (imagine billiard balls ricocheting off each other, but without the simulation).
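(Toy illustration, not THRML: a software Gibbs sampler over two coupled bits, with a seeded PRNG standing in for the physical noise. The point is that the long-run histogram of states matches the Boltzmann distribution, which is what p-bit hardware gives you natively.)

```python
import math
import random

# Assumed toy model: two coupled bits with energy E(s0, s1) = -J * s0 * s1.
J, T = 1.0, 1.0

def gibbs_sample(sweeps, seed=0):
    """Software stand-in for p-bits: each bit flips stochastically given its
    neighbor; state frequencies converge to the Boltzmann distribution."""
    rng = random.Random(seed)
    s = [0, 0]
    counts = {}
    for _ in range(sweeps):
        for i in (0, 1):
            # p(s_i = 1 | other bit) is the Boltzmann weight of that option.
            field = J * s[1 - i] / T
            s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-field)) else 0
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    return {k: v / sweeps for k, v in counts.items()}

# Exact Boltzmann probabilities p(s) = exp(-E(s)/T) / Z for comparison.
states = [(a, b) for a in (0, 1) for b in (0, 1)]
weights = {s: math.exp(J * s[0] * s[1] / T) for s in states}
Z = sum(weights.values())
exact = {s: w / Z for s, w in weights.items()}
```

The aligned state (1, 1) has the lowest energy, so it shows up most often; the sampled frequencies track the exact distribution.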
To put this into another metaphor: it's like solving sudoku by gluing some nine-sided dice in place, then shaking the board with a bunch of other nine-sided dice until they settle into the correct values. Done literally, the odds of that working would be something like one in billions or trillions. But because of the physics of the TSU, the correct configuration is the default result. It's still blowing my mind.
Now, my final take is perhaps the most "out there" but hear me out.
I think that TSUs could help control nuclear fusion.
Creating a magnetic containment field for nuclear fusion is a complex problem, not unlike the interlocking variables that go into sudoku, 8 queens, and protein folding.
The difference is that it all happens at very high speed. BUT at any given moment, there is an "optimal" field shape for nuclear fusion.
My thought is that a TSU is likely the best bet for controlling plasma, and could dramatically increase the efficiency and output of nuclear fusion reactors.
This is why I keep saying YOU ARE NOT EXCITED ENOUGH ABOUT THIS