Woody Says Hello retweeted
Someone recreated “Lose Yourself” by stitching together 187 random movie clips to form the lyrics.
This is a step forward
Humanoid robots at deadmau5 🔥🔥🔥
Woody Says Hello retweeted
OpenAI's massive partnerships signal a generational shift:
$500B stargate deal
$100B nvidia deal
$100B AMD deal
$38B amazon deal
$25B intel deal
$20B TSMC deal
$13B microsoft deal
$10B oracle deal
we stand at the center of a generational shift in technology
Woody Says Hello retweeted
1 million businesses using the OpenAI API!
we are grateful to the more than 1 million business customers building with us openai.com/index/1-million-b…
Woody Says Hello retweeted
Empire of AI by @_karenhao is by far the most accurate telling of the era when I was at OpenAI, which was an important few years – from the first commercial step to shortly after the launch of ChatGPT. There is one important piece that is incorrect: the portrayal of @sama. He's presented as some Machiavellian and reckless leader, and the facts don't support that.
I joined OpenAI when we were about 100 people and purely a research lab. As head of product, I helped transition OpenAI from a research org to one deploying our research as products. During this time a number of large and complex decisions were worked through. There were no easy and obvious solutions to any of these, and many of these decisions were seemingly at odds with past decisions.
Complex situations often look very different to people, and there were dynamics at OpenAI during this time that made everything more challenging – from the org's structure to philosophical belief structures and much in between. The weirdness of OpenAI at this time appealed to me – the unusual structure felt like it created space for something different, and the differing beliefs (while exhausting at times) felt necessary for navigating genuinely novel territory. But that same weirdness created real tensions as we worked through three major challenges.
First, the Microsoft partnership: how do we take billions from a tech giant without compromising independence and our mission? Second, productization: how do we go from a research lab to shipping products without abandoning our original purpose? Third, deployment: how do we deploy AI research fast enough to matter while being careful enough to be responsible?
In the moment, none of these had obvious answers. The right path forward was uncertain, and reasonable people disagreed – often strongly – about what we should do. Led by Sam, we worked through each of these tensions carefully and deliberately. With the fullness of time and the ability to see how things actually played out, I believe the evidence shows we reached the right decisions on all three.
When negotiating the early Microsoft deal, the entire term sheet was shared with everyone at the org. We'd add questions and comments, and then Sam would host an endless meeting where we'd talk through the questions, discuss the spirit of what we cared about, gather feedback on what missed the mark, etc. Each iteration of the term sheet, month after month, progressed like this. Some opposed the partnership, but their voices were always heard and attempts to address their concerns were made. In hindsight, a deal of this sort was required – there was no other viable path – but Sam ensured that our independence and our mission were preserved while spending time working through everyone's concerns.
The first product roadmap spent considerable time articulating why shipping product supported our mission and how we could do so safely. I spent significant time working through my colleagues' concerns about productization because getting buy-in across the org on the why was essential to doing it right. With Sam's full support, we consistently slowed down our product work and made decisions that hurt our business and metrics. We refused to allow entire use-cases we felt we couldn't handle responsibly. We learned what was required – technically and operationally – to comfortably support select use-cases and prioritized that work. We fired some of our biggest customers because we were concerned about misuse.
We didn't get everything right during this era, but we did an excellent job identifying, sizing, and mitigating risk while building one of the most widely used products in history. This wasn't luck; it was the result of the deliberate, sometimes frustrating culture Sam insisted we work through.
On deployment, many of us believed that deployment was essential to the safety strategy (not separate and something to fear). Learning to deploy the research responsibly would require practice, and the time to practice was when the stakes were lowest. And so we embraced an iterative deployment strategy. While other labs struggled with misuse and PR crises, we consistently deployed without major incidents, and we learned and improved with each model release. We all understood that being able to shape the norms and standards of AI was critical to our mission. Sam argued that writing policy memos could only go so far and that we'd be in a much stronger position to define norms aligned with our values if we were consistently the first to deploy responsibly. His argument proved more correct than many of us realized at the time.
One question I've reflected on a lot is why brilliant, well-intentioned people have such different views of this era and Sam's leadership. I have respect for many who have framed Sam's leadership negatively, and count many of them as friends, and so it's somewhat uncomfortable to share my conclusion. Over the years, when I've listened to people share examples of what they saw as problematic behavior, I've noticed that it often traces back to one of these dynamics: someone who lost an internal debate and attributed it to bad faith rather than legitimate disagreement; someone who struggled to accept that complex situations made previous plans untenable; someone unfamiliar with how large organizations with multiple stakeholders actually function; or someone who pursued power and lost.
I don't say this to dismiss the substance of these perspectives – the concerns about Microsoft, productization, and deployment were real. But I think these underlying dynamics shaped how people interpreted complex, ambiguous situations.
When I joined, I was told we'd only ever be 200 people. For reasons I understood, we had to abandon this idea. I didn't feel lied to or misled. I understood we were navigating novel territory where plans had to evolve. Not everyone experienced it that way, and I understand why. But those different experiences don't mean Sam was acting in bad faith.
With several years of distance, I believe the major decisions from that era have held up remarkably well. That doesn't mean we got everything right or that the concerns weren't legitimate – but it does suggest Sam was navigating these tensions with more wisdom than many give him credit for.
This is amazing
Before the labs got IMO Gold, I watched models 10x the state-of-the-art on FrontierMath, the benchmark I designed to last 5+ years. I have no more doubt: AI will radically reshape mathematical research in the coming years. I'm done being a referee. I've joined Principia Labs, a startup with a singular mission: build models that advance frontier mathematical research.
I spent the past year at Epoch AI studying the trajectory of math AI. After the preliminary FrontierMath evals, I felt assured my field remained untouchable. Then OpenAI unveiled their new reasoning architecture with a massive leap forward. The other labs quickly caught up, and we had to add an extra tier of difficulty to FrontierMath just to be confident it would resist saturation until next year.
Competition math and literature search are solved. But entire modes of mathematical discovery remain out of reach, and hill-climbing FrontierMath-style evals hasn't been building those muscles. There is no wall preventing AI from surpassing the best mathematicians, but we're still several breakthroughs away.
If you're an AI researcher or mathematician who sees where this is headed, help us build it rather than watch it happen. We're looking for exceptional people to work on deep learning, RL, and formal math. DM me or email elliot@principialabs.org.
According to someone, he intended to delete them when he acquired the social network. However, those promises turned out to be empty.
The number of bots on this platform is off the hook.
You should teach ordinary people how to use Codex (vibe coding). It would be excellent
We're putting together some eng blog posts about how we use Codex at @openai. Looking forward to sharing them!
Woody Says Hello retweeted
Codex has transformed how OpenAI builds over the last few months. Have some great upcoming models too. Amazing work by the team!
Woody Says Hello retweeted
We were promised “grown up mode” in 2025. 60 days left
Woody Says Hello retweeted
What people always conveniently leave out in the latest discussion about Sam is that not only did Ilya regret all the mess, most OpenAI people set up a petition to bring back Sam, and Ilya himself signed it as well. I don't know of a company where people are that loyal to their CEO
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
Woody Says Hello retweeted
Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.
i helped turn the thing you left for dead into what should be the largest non-profit ever. you know as well as anyone a structure like what openai has now is required to make that happen.
Woody Says Hello retweeted
Dragon Ball Z × Linkin Park - Numb
Woody Says Hello retweeted
18 months ago people were complaining that AI was not quite at undergraduate level in Maths; nine months ago, that it was not quite at PhD level. We are now seeing it at an advanced post-graduate level… (And people are reassuring themselves that it is not yet at a Nobel level.)
I crossed an interesting threshold yesterday, which I think many other mathematicians have been crossing recently as well. In the middle of trying to prove a result, I identified a statement that looked true and that would, if true, be useful to me. 1/3
Woody Says Hello retweeted
Just need good excel and ppt capability now.
Woody Says Hello retweeted
That tiny teaspoon of honey in your tea is more precious than it seems. To make just one teaspoon, 12 honeybees must work together their entire lives, visiting over 30,000 flowers and flying nearly 800 miles—all while carrying nectar drop by drop back to the hive. Bees are master engineers of nature. They communicate using waggle dances, coordinate massive team foraging missions, and maintain hive temperatures with wing vibrations. But beyond honey, bees play a critical role on the planet—they pollinate 75% of the world’s crops, including fruits, vegetables, and nuts.
If @OpenAI hadn't launched @ChatGPTapp, this technology would still be exclusive to private labs. Google would probably be using it only to improve its advertising strategies. Thank you, @sama, for sharing this technology.
Replying to @yacineMTB
if i were like, a sports star or an artist or something, and just really cared about doing a great job at my thing, and was up at 5 am practicing free throws or whatever, that would seem pretty normal right?
the first part of openai was unbelievably fun; we did what i believe is the most important scientific work of this generation or possibly a much greater time period than that. this current part is less fun but still rewarding. it is extremely painful as you say and often tempting to nope out on any given day, but the chance to really "make a dent in the universe" is more than worth it; most people don't get that chance to such an extent, and i am very grateful. i genuinely believe the work we are doing will be a transformatively positive thing, and if we didn't exist, the world would have gone in a slightly different and probably worse direction. (working hard was always an extremely easy trade until i had a kid, and now an extremely hard trade.)
i do wish i had taken equity a long time ago and i think it would have led to far fewer conspiracy theories; people seem very able to understand "ok that dude is doing it because he wants more money" but less so "he just thinks technology is cool and he likes having some ability to influence the evolution of technology and society". it was a crazy tone-deaf thing to try to make the point "i already have enough money".
i believe that AGI will be the most important technology humanity has yet built, i am very grateful to get to play an important role in that and work with such great colleagues, and i like having an interesting life.