AI coding today is bottlenecked by LLM speed, intelligence, and cost – all seeing regular 10x improvements
Which raises the question: where is this all going? What is a positive vision for the future of coding with AI?
In a positive AI coding experience:
1. I understand more about the problem space & my code – not less
2. I am able to express my taste more & make more choices in implementation – not less
3. I am in a flow state, immersed in the creation – not waiting or constantly switching contexts
On the AI improvement side, let's make a few 1000x assumptions that seem a decade or two out:
a. throughput = 1m tokens / sec
b. intelligence = smartest human level
c. cost = 1 cent / 1b tokens
A corollary of this is that as the cost drops, we'll use more and more inference in parallel. So for the sake of imagination, picture:
i. 100 of the smartest people you know
ii. reading and typing 1000x faster than you
iii. hooked up to vm sandboxes (so they can run experimental code safely)
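To see just how extreme these assumptions are, here's the back-of-envelope arithmetic. All figures are the hypothetical targets from the list above, not real prices:

```python
# Back-of-envelope math for the 1000x assumptions above.
TOKENS_PER_SEC = 1_000_000          # (a) throughput per agent
COST_PER_TOKEN = 0.01 / 1e9         # (c) 1 cent per billion tokens
NUM_AGENTS = 100                    # (i) the parallel "army"

tokens_per_agent_hour = TOKENS_PER_SEC * 3600  # 3.6 billion tokens/hour
cost_per_agent_hour = tokens_per_agent_hour * COST_PER_TOKEN
cost_of_army_per_hour = cost_per_agent_hour * NUM_AGENTS

print(f"tokens/agent/hour: {tokens_per_agent_hour:.1e}")  # 3.6e+09
print(f"cost/agent/hour:   ${cost_per_agent_hour:.3f}")   # $0.036
print(f"army cost/hour:    ${cost_of_army_per_hour:.2f}") # $3.60
```

A hundred genius-level agents, each producing 3.6 billion tokens an hour, for less than the price of a coffee. At that point compute is effectively free, which is exactly why the bottleneck has to be somewhere else.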
It's immediately obvious that the bottleneck is moving information between the human and this army of AIs.
It's an I/O problem!
Point (1) above is about me wanting to get data from the AIs and point (2) is about me wanting to get data into AIs.
Which raises the next question: is the I/O input-bound or output-bound? Are we trying to get more data from the AIs or into the AIs?
I believe output is the bottleneck: How fast we can get data from the AIs into the human brain. The bottleneck is how fast a human can understand and learn.
This is profound and counter-intuitive, so let me say it again:
HUMAN LEARNING IS THE BOTTLENECK
Why? Because we want to maximize the expression of human taste, but human taste is limited by what the human knows. It must be the job of the AIs to inform the human so that they can make good choices.
(This rhymes with HARC's vision of ensuring "human wisdom exceeds human power")
Let me illustrate this with an approximate conversation I had with a non-technical friend:
Friend: "I want to build a social network"
Me: "Ok... why?"
Friend: "Because when I started college it was really hard to meet people, find clubs, find a good roommate."
Me: "Are you telling me you want to start a social network for college kids?"
Friend: "Yeah, why?"
Me: "No reason. Ok so what about facebook didn't work for you?"
Friend: *lists random problems with facebook*
Me: "Ok sure. What are your plans for adoption? How will you get folks onto your network?"
Friend: "Campus ambassadors, marketing stunts, a better product."
Me: "Hmmm, sounds really hard. Have you read The Lean Startup, any Paul Graham, or any Nikita Bier on how to start and grow a social app?"
Friend: "Nope"
Me: "Ok, I'd start there"
Friend: "Nope, I just want you to find me a code monkey to build it"
Me: "Nobody will build this for you. You should learn to code so you can build this yourself."
Friend: "No, I'm the ideas guy. I'm a coder"
*friend drops out of college, raises a couple million, hands it to code monkeys, with the expected result*
I think you get the point. The bottleneck here is not helping this person make a social network for college students. The bottleneck is them learning more about startups, technology, etc – being educated enough even to have good goals.
Let me give a thought experiment. What if I told you that everybody that worked at Google was actually an AI? The only human in the whole company is Sundar. Now Sundar is going to retire and he is tapping you to be the new CEO and only human at Google. Congrats on your new job!
Now as the new CEO, are you going to be like my over-confident friend and start telling your AI army to start making random apps that you think are good ideas? No! You're smarter than that. You know that your ability to be a good CEO of Google is bounded by your ability to UNDERSTAND as much as possible about how Google works. Then and only then, when you have a clear-eyed view of things, can you even begin to have a hope of expressing your human taste in a way that will have a positive impact.
The problem of AI coding reduces to the problem of teaching humans as fast as possible
This insight is completely against the grain of vibe coding. Most of the fervor around AI coding seems centered around non-technical people celebrating that they never have to learn anything new in order to do technical things. It's all about skimming on the surface with the least understanding possible to get the most done.
But why, you might ask, do humans need to understand stuff if the AI is super smart? For the same reason that we want the CEO of Google to be as smart as possible! The details matter. If a human can't express their judgement in an informed way, then they're better off not expressing their judgement at all.
Let me give you concrete examples that I've seen over and over again with non-technical vibe coders I've encountered:
They do not understand the basics of HTTP, and client, server, database architecture
They do not understand the basics of debugging and the importance of a reproducible example
People who don't understand these things (yet), shouldn't be making decisions about apps. They should be learning those things, so that they are the kinds of people who can make apps.
I'm not trying to be elitist. I'm saying there are levels to things. I have no business flying a plane or building a plane or anywhere near the hood of a car. Yes, I can spend money and get someone (pretend they're an AI if it helps) to do those things for me (call that "vibe flying" or "vibe fixing a car") but I am not doing those things.
If you don't understand what you're doing, you're not doing it. You're commissioning it. Vibe coders are commissioners, like my non-technical friend, who wasted millions on getting code monkeys to build his dumb social network. The limiting factor is not what they can get built, but their ability to know what to build. This is why the world is filled with garbage AI apps with zero users, not lots of new vibe coded app businesses.
If I haven't convinced you, you may as well stop reading. If I have convinced you, great. Let's talk about:
How does one get data into the human brain as fast as possible?
Obviously, we could talk about brain–machine interfaces, Neuralink, yada yada, but for the purposes of this rant, let's stick to the five senses we know and love. What can I say? I'm just old-school that way.
So what's the most educational 5 minutes or one hour you could imagine? Think back to a time you learned something very quickly. What were you doing?
I learn a ton by reading. The Lean Startup, PG essays, etc, taught me a ton about how to know what to build. The sad truth (for an educator) is that the pathways to greater understanding have been open to all ever since public libraries. Google, Wikipedia, and now AI, only make this story better. Yet people resist learning. I think only 10% of the people to whom I recommend The Lean Startup or The Mom Test actually read them. This is going to be one of the key problems to solve if we truly want to empower people with AI: somehow convince them to learn. Somehow make learning fun.
Most of what I know about actually building comes from getting my hands dirty in the details and actually building. Ironically, current AI coding tools isolate their users from this process, which is entirely counter-productive given that the goal is deeper understanding.
Let's examine one example topic in particular. I've spent a lot of time teaching people about client/server architecture and APIs. You can't just read about these concepts, you have to play with them. You can't learn surfing without getting in the water.
You have to make some API calls. You have to write some server responses. You have to do some debugging. It takes hours and hours for these concepts to sink in. And there's no way to skip this part. The human ability to learn surfing has been entirely unaffected by the rise of AI, and the same is true for learning client/server architecture.
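For anyone who wants to get in the water right now, here's a minimal sketch of that whole round trip using only Python's standard library. The handler class, port choice, and `/greet` path are all illustrative, but the moving parts (a server hand-building an HTTP response, a client making an API call and parsing JSON) are the real thing:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server builds an HTTP response by hand: status, headers, body.
        body = json.dumps({"path": self.path, "message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port; run the server in a background thread.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client makes an API call and parses the JSON it gets back.
with urlopen(f"http://127.0.0.1:{port}/greet") as resp:
    status = resp.status
    data = json.loads(resp.read())

print(status, data)  # 200 {'path': '/greet', 'message': 'hello'}
server.shutdown()
```

Twenty-odd lines, and it already forces you to confront status codes, headers, request paths, and serialization. That's the kind of friction you learn from.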
Not to pitch my company, but this is the problem Val Town is trying to solve: we get folks into a running programming environment so they can immediately start playing with running code. Come on in, the water's fine:
val.town
How can AI help? Maybe it can find a way to make learning more fun or engaging? Maybe it can paper over the unnecessarily annoying parts? Maybe it can help design amazing curricula? Maybe it can be the world's best Socratic teacher?
I don't know the answers, but I think those are the right questions.