Quick overview of HOC / HVM / Bend's current state:
- about 1 year ago, we launched Bend1
- first lang to run closures + a fast object allocator on GPU
- near-ideal speedup up to 10000+ cores
- based on HVM2, a strict runtime for Interaction Nets
Problems:
- interpretation overhead still significant
- needed a full RTX 4090 just to beat 1-core OCaml / JavaScript / etc.
- big practical limitations (int24, no IO, no packages)
- despite Python syntax, it was still hard to use
- turns out most devs can't think recursively
- incompatible with lazy evaluation (not β-optimal!!)
I was disappointed by the problems above. At the same time, I was increasingly optimistic about applying optimal evaluation to program synthesis, a cornerstone of Symbolic AI - a failed idea, but one that left me with a strong feeling of "I can fix it".
I made a decision: throw HVM2 away (💀) and go back to the HVM1 roots, which was based on my "Interaction Calculus" and featured β-optimality. I heavily polished and improved it, resulting in HVM3, a prototype written in Haskell. I then used it to understand and research program synthesis on optimal evaluators. This was HARD, and cost about a year of my life, but the results were positive: our system now beats all published alternatives in efficiency and capabilities.
Now, we're taking all of that and solidifying it: implementing the runtime / compiler in raw C, so it runs as efficiently as possible on our humble Mac Mini cluster (🥹), and serving it to the world via an API.
I expected to launch by October, but some challenges cost me more time than I anticipated. For one, finding Lean proofs with SupGen requires very careful handling of superpositions, and doing that in C is actually HARD AS HELL - but things are moving steadily, a lot is already done, and I still expect to launch Bend2 / HVM4 this year or in Q1 2026.
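For intuition, here's a toy Python sketch of what a superposition is. This is *not* how HVM actually represents them (real SUP nodes carry labels and share work across branches in the interaction-net graph) - it just shows the surface idea: one superposed term overlays many candidate values, operations flow through all branches at once, and "collapsing" enumerates the plain values.

```python
class Sup:
    """A superposition of two alternatives, overlaid in a single term.
    (Toy model only; HVM's SUP nodes are labeled and share subterms.)"""
    def __init__(self, left, right):
        self.left, self.right = left, right

def smap(f, v):
    """Apply f through a superposed value, distributing over both branches."""
    if isinstance(v, Sup):
        return Sup(smap(f, v.left), smap(f, v.right))
    return f(v)

def collapse(v):
    """Enumerate every plain value overlaid in a superposed term."""
    if isinstance(v, Sup):
        yield from collapse(v.left)
        yield from collapse(v.right)
    else:
        yield v

# one superposed term standing for four candidate numbers
t = Sup(Sup(1, 2), Sup(3, 4))
print(list(collapse(smap(lambda x: x * 10, t))))  # [10, 20, 30, 40]
```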
Bend2 will have:
- parallel CPU runtime with lazy/optimal mode (!!!)
- 16 / 32 / 64 bit ints, uints and floats (finally)
- arbitrary IO via lightweight C interop (like Zig!)
- no CUDA yet, due to lack of time, very doable though
- most importantly: SupGen integration
SupGen is something new, and the main novelty behind Bend2. It is *not* a traditional AI - it is a whole new thing, capable of generating code from examples and specs. I think many (especially those in deep learning) will be caught totally off guard by how much pure symbolic search can accomplish, and, more than anything else, I can't wait to watch that reaction.
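To make "generating code from examples" concrete, here's a tiny Python toy. It is emphatically *not* SupGen (which works on superpositions in an optimal evaluator, not a naive loop) - just the bare symbolic-search idea underneath: enumerate candidate programs, return the first one consistent with every input/output example.

```python
# Toy enumerative synthesis: find an arithmetic expression in `x`
# that matches all given (input, output) examples.

def exprs(depth):
    """All expression source strings up to a given tree depth."""
    atoms = ["x", "0", "1", "2", "3"]
    if depth == 0:
        return atoms
    subs = exprs(depth - 1)
    return atoms + [f"({a} {op} {b})" for a in subs for b in subs for op in "+*"]

def synth(examples, depth=2):
    """Return the first enumerated expression matching every example."""
    for src in exprs(depth):
        if all(eval(src, {"x": i}) == o for i, o in examples):
            return src
    return None

# examples drawn from f(x) = 2*x + 1
print(synth([(1, 3), (2, 5), (3, 7)]))  # finds an expression equal to 2*x + 1
```

Even this naive loop finds small programs instantly; the hard part - and where superpositions and optimality come in - is making the search scale.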