Empire of AI by @_karenhao is by far the most accurate telling of the era when I was at OpenAI, which was an important few years – from the first commercial step to shortly after the launch of ChatGPT.
There is one important piece that is incorrect: the portrayal of @sama. He’s presented as a Machiavellian and reckless leader, and the facts don’t support that.
I joined OpenAI when we were about 100 people and purely a research lab. As head of product, I helped transition OpenAI from a research org to one deploying our research as products. During this time we worked through a number of large and complex decisions. None of them had easy or obvious solutions, and many seemed at odds with past decisions.
Complex situations often look very different to different people, and there were dynamics at OpenAI during this time that made everything more challenging – from the org’s structure to philosophical belief structures and much in between.
The weirdness of OpenAI at this time appealed to me – the unusual structure felt like it created space for something different and the differing beliefs (while exhausting at times) felt necessary for navigating genuinely novel territory.
But that same weirdness created real tensions as we worked through three major challenges. First, the Microsoft partnership: how do we take billions from a tech giant without compromising independence and our mission? Second, productization: how do we go from a research lab to shipping products without abandoning our original purpose? Third, deployment: how do we deploy AI research fast enough to matter while being careful enough to be responsible?
In the moment, none of these had obvious answers. The right path forward was uncertain, and reasonable people disagreed – often strongly – about what we should do. Led by Sam, we worked through each of these tensions carefully and deliberately. With the fullness of time and the ability to see how things actually played out, I believe the evidence shows we reached the right decisions on all three.
When negotiating the early Microsoft deal, the entire term sheet was shared with everyone at the org. We’d add questions and comments, and then Sam would host an endless meeting where we’d talk through the questions, discuss the spirit of what we cared about, gather feedback on what missed the mark, etc. Each iteration of the term sheet, month after month, progressed like this. Some opposed the partnership, but their voices were always heard and real attempts were made to address their concerns. In hindsight, a deal of this sort was required – there was no other viable path – but Sam ensured that our independence and our mission were preserved while working through everyone’s concerns.
The first product roadmap spent considerable time articulating why shipping product supported our mission and how we could do so safely. I spent significant time working through my colleagues’ concerns about productization because getting buy-in across the org on the why was essential to doing it right. With Sam’s full support, we consistently slowed down our product work and made decisions that hurt our business and metrics. We refused to allow entire use cases we felt we couldn’t handle responsibly. We learned what was required – technically and operationally – to comfortably support select use cases and prioritized that work. We fired some of our biggest customers because we were concerned about misuse. We didn’t get everything right during this era, but we did an excellent job identifying, sizing, and mitigating risk while building one of the most widely used products in history. This wasn’t luck; it was the result of the deliberate, sometimes frustrating culture Sam insisted we work through.
On deployment, many of us believed that deployment was essential to the safety strategy – not something separate to fear. Learning to deploy the research responsibly would require practice, and the time to practice was when the stakes were lowest. And so we embraced an iterative deployment strategy. While other labs struggled with misuse and PR crises, we consistently deployed without major incidents, and we learned and improved with each model release. We all understood that being able to shape the norms and standards of AI was critical to our mission. Sam argued that writing policy memos could only go so far, and that we’d be in a much stronger position to define norms aligned with our values if we were consistently the first to deploy responsibly. His argument proved more correct than many of us realized at the time.
One question I’ve reflected on a lot is why brilliant, well-intentioned people have such different views of this era and Sam’s leadership. I have respect for many who have framed Sam’s leadership negatively, and count many of them as friends, and so it’s somewhat uncomfortable to share my conclusion. Over the years, when I’ve listened to people share examples of what they saw as problematic behavior, I’ve noticed that it often traces back to one of these dynamics: someone who lost an internal debate and attributed it to bad faith rather than legitimate disagreement; someone who struggled to accept that complex situations made previous plans untenable; someone unfamiliar with how large organizations with multiple stakeholders actually function; or someone who pursued power and lost.
I don’t say this to dismiss the substance of these perspectives – the concerns about Microsoft, productization, and deployment were real. But I think these underlying dynamics shaped how people interpreted complex, ambiguous situations. When I joined, I was told we’d only ever be 200 people. For reasons I understood, we had to abandon this idea. I didn’t feel lied to or misled; I understood we were navigating novel territory where plans had to evolve. Not everyone experienced it that way, and I understand why. But those different experiences don’t mean Sam was acting in bad faith. With several years of distance, I believe the major decisions from that era have held up remarkably well. That doesn’t mean we got everything right or that the concerns weren’t legitimate – but it does suggest Sam was navigating these tensions with more wisdom than many give him credit for.