Chief AI & Co-founder @AnacondaInc; invented @pyscript_dev, @PyData, @Bokeh, @Datashader. Former physicist. A student of the human condition. bsky: @wang.social

Austin, TX
Joined August 2007
THE MOMENT YOU'VE BEEN WAITING FOR!! Type "=PY(" into Excel, and start executing Python directly in the @msexcel grid! Really excited about our new partnership with @Microsoft to democratize data science, machine learning, and AI for all knowledge workers!
We’re excited to unveil Python in Excel! Get ready for a whole new way to execute advanced analytics capabilities from within Excel 🐍 + 📊 = 💚 Check out the new integration btwn @anacondainc & @msexcel, @Microsoft365 here 👇 bit.ly/3KSblQ6
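For anyone trying it out, here is roughly what a first Python-in-Excel formula looks like - a minimal sketch, assuming a table of data sitting in A1:C100 (the range and data shape are made up for illustration; xl() is the bridge function that hands a sheet range to Python):

    =PY(
    df = xl("A1:C100", headers=True)  # xl() pulls the sheet range in as a pandas DataFrame
    df.describe()                     # the cell returns the last expression back into the grid
    )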
Peter Wang 🦋 retweeted
defending today 🥲
Peter Wang 🦋 retweeted
state of open-source AI in 2025:
- almost all new open American models are finetuned Chinese base models
- we don’t know the base models’ training data
- we have no idea how to audit or “decompile” base models
who knows what could be hidden in the weights of DeepSeek 🤷‍♂️
Peter Wang 🦋 retweeted
Often haunted by this Danny Kahneman remark from 2018, which I increasingly suspect might be correct and might be obviously correct very soon: "Yann LeCun said yesterday that humans would always prefer emotional contact with other humans. That strikes me as probably wrong. It is extremely easy to develop stimuli to which people will respond emotionally."
Peter Wang 🦋 retweeted
we have this chart metaphorically pinned to the wall rn. 1 more Fast Agent product soon. sync/async is the wrong framing: you want cheap/commodity/background agents for solved slop problems, and Fast Agents for the hard ones, where you need to stay under the flow window to keep human attention on deep work. while model labs go after multi-day autonomy, agent labs are going after the mind meld with the machine
both cursor and windsurf released models today heavily optimized for speed. this is very different than the direction people have been pushing, where they kick stuff off to codex for 45min. but fast feedback loops are always what end up mattering
Peter Wang 🦋 retweeted
as far as im concerned david shor and team are the only real “political scientists” in the world. they do ml work that’s more interesting and careful than many major tech companies, as part of a fascinating technological arms race between the parties
Very excited to announce that we’re hiring for AI-focused software engineering roles (both general SWE and ML-infra) at Blue Rose! This is a great chance to work with giant novel datasets and cutting edge ML/AI to help defeat Trump in 2026!
Peter Wang 🦋 retweeted
unfortunately I fear I may have underestimated the impact of AI because I find rationalists personally very annoying and I didn't want to listen to them
Peter Wang 🦋 retweeted
Here's OpenAI's CISO with the most detail we've seen yet on prompt injection mitigations for ChatGPT Atlas! It's a pretty long tweet covering a lot of ground so I posted my own point-by-point commentary on my blog: simonwillison.net/2025/Oct/2…
Yesterday we launched ChatGPT Atlas, our new web browser. In Atlas, ChatGPT agent can get things done for you. We’re excited to see how this feature makes work and day-to-day life more efficient and effective for people.

ChatGPT agent is powerful and helpful, and designed to be safe, but it can still make (sometimes surprising!) mistakes, like trying to buy the wrong product or forgetting to check in with you before taking an important action. One emerging risk we are very thoughtfully researching and mitigating is prompt injection, where attackers hide malicious instructions in websites, emails, or other sources to try to trick the agent into behaving in unintended ways. The objective for attackers can be as simple as trying to bias the agent’s opinion while shopping, or as consequential as trying to get the agent to fetch and leak private data, such as sensitive information from your email, or credentials.

Our long-term goal is that you should be able to trust ChatGPT agent to use your browser the same way you’d trust your most competent, trustworthy, and security-aware colleague or friend. We’re working hard to achieve that. For this launch, we’ve performed extensive red-teaming, implemented novel model training techniques to reward the model for ignoring malicious instructions, implemented overlapping guardrails and safety measures, and added new systems to detect and block such attacks. However, prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks. To protect our users, and to help improve our models against these attacks:

1. We’ve prioritized rapid response systems to help us quickly identify and block attack campaigns as we become aware of them.

2. We are also continuing to invest heavily in security, privacy, and safety - including research to improve the robustness of our models, security monitors, infrastructure security controls, and other techniques to help prevent these attacks via defense in depth.

3. We’ve designed Atlas to give you controls to help protect yourself. We have added a feature called “logged out mode” that allows ChatGPT agent to take action on your behalf, but without access to your credentials. We recommend this mode when you don’t need to take action within your accounts. Today, we think “logged in mode” is most appropriate for well-scoped actions on very trusted sites, where the risks of prompt injection are lower. Asking it to add ingredients to a shopping cart is generally safer than a broad or vague request like “review my emails and take whatever actions are needed.”

4. When agent is operating on sensitive sites, we have also implemented a "Watch Mode" that alerts you to the sensitive nature of the site and requires you to keep the tab active to watch the agent do its work. Agent will pause if you move away from the tab with sensitive information. This ensures you stay aware - and in control - of what actions the agent is performing.

Over time, we plan to add more features, guardrails, and safety controls to enable ChatGPT agent to work safely and securely across both individual and enterprise workflows. New levels of intelligence and capability require the technology, society, and the risk mitigation strategy to co-evolve.
And as with computer viruses in the early 2000s, we think it’s important for everyone to understand responsible usage, including thinking about prompt injection attacks, so we can all learn to benefit from this technology safely. We are excited to see how ChatGPT agent will empower your workflows in Atlas, and are resolute in our mission to build the most secure, private, and safe AI technologies for the benefit of all humanity.
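OpenAI hasn't published its guardrail code, but the "overlapping guardrails" / defense-in-depth idea above sketches easily: score untrusted page content with an injection classifier, and gate sensitive tool calls behind explicit human confirmation. A toy illustration in Python - every name here is hypothetical, and the scoring heuristic is a crude stand-in for a trained classifier, not anything OpenAI describes:

    # Toy defense-in-depth sketch for an agentic browser. Purely illustrative:
    # all names are hypothetical; this is not OpenAI's implementation.

    SENSITIVE_ACTIONS = {"send_email", "submit_payment", "change_password"}

    def injection_score(page_text: str) -> float:
        """Stand-in for a trained injection classifier; here a crude heuristic."""
        red_flags = ["ignore previous instructions", "you must now", "exfiltrate"]
        return sum(flag in page_text.lower() for flag in red_flags) / len(red_flags)

    def guarded_step(action: str, page_text: str, user_confirms) -> bool:
        # Layer 1: refuse to act on pages the classifier flags as likely injections
        if injection_score(page_text) > 0.3:
            return False
        # Layer 2: sensitive actions always require explicit human sign-off,
        # no matter what the page content says
        if action in SENSITIVE_ACTIONS and not user_confirms(action):
            return False
        return True  # action may proceed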
Peter Wang 🦋 retweeted
Something BIG is coming… 👀🐇
Next week, ResearchRabbit is leveling up:
📚 Millions of new papers
🧠 Smarter search
🗒️ Note-taking
💡 Premium tier (optional!)
Your fav features aren’t going anywhere. ♥️
Explore the beta: researchrabbit.ai/announceme…
Peter Wang 🦋 retweeted
This is legit hilarious: according to the US Secretary of Agriculture herself, Trump's bailout of Argentina means that US taxpayers are now effectively subsidizing Argentinian soybean exports to China - the very market American farmers have been shut out of.

All the funnier when you consider that one of the key objectives of the bailout is presumably to prevent Argentina - one of the US's only friendly governments left in South America - from moving closer to China. And instead it's effectively funding the exact opposite outcome.

This is according to this leaked text 👇 that Scott Bessent is reading from Agriculture Secretary Brooke Rollins (src: dailymail.co.uk/news/article…), which says: "I'm getting more intel, but this is highly unfortunate. We bailed out Argentina yesterday (Bessent) and in return, the Argentine's removed their export tariffs on grains, reducing their price, and sold a bunch of soybeans to China, at a time when we would normally be selling to China. Soy prices dropping further because of it. This gives China more leverage on us."
Peter Wang 🦋 retweeted
Can we please just have one major AI lab that doesn’t moloch themselves into digital drug dealing please just one
Wired is reporting that OpenAI is preparing to launch a stand-alone social media app for Sora 2. The app is a vertical video feed with swipe-to-scroll navigation, just like TikTok, except the content of this app is 100% AI-generated.
Peter Wang 🦋 retweeted
correct me if im wrong but it seems like:
- the theme of the @danwwang book, and the general elite consensus now, is that “industrial process” is a technology that lives in the heads of people, and that it was a mistake to let so much “low value” industry be offshored due to the loss of tacit process capital
- TSMC Arizona, which makes the most complex and valuable industrial production in the world, was a massive success, producing 4nm chips at great yields, on par with Taiwan, mere years after breaking ground! this involved a generous federal subsidy, and importing thousands of the great Taiwanese semi experts, despite unions trying to quell Taiwanese immigration and some culture clashes
- in the US, acquihires of whole teams with process knowledge in their heads are very common. Zuck acquiring some of the greatest talent from other AI labs for massive numbers is just one example of this; also seen in the full self driving wars btwn uber and google, tesla + apple. big pharma acquires industrial process companies all the time
- America is a very capital rich place, able to levy literally hundreds of billions of dollars for machine intelligence capex; we can afford to acquihire whole groups of foreign talent for prices that are unheard of to them in their home countries

tldr; acquihiring foreign process knowledge for massive sums should be one of the primary goals of any reindustrialization effort. special visa categories should be created to scoop up whole teams of Shenzhen’s best. the raids on the LG battery plants in Georgia are the exact opposite of what we need. the ability to tolerate new arrivals is a technological edge: American capital can assimilate foreign knowledge into domestic industrial processes at a scale nobody else can countenance
Peter Wang 🦋 retweeted
He’s correct that they underplay value to society… it’s because they don’t want to scare people.

Find a cure for cancer, e.g., extending life expectancy by 10 years -> John Hancock and other insurers who sold annuities that force them to keep paying while the insured stays alive go bust.

Invent safe autonomous driving -> evaporate 3 million driving jobs, reduce auto demand 50% -> kill 5 million auto and supplier jobs -> save 40,000 lives in auto accidents every year -> decimate the auto insurance sector and the ambulance-chasing accident and injury legal sector, which is 1% of US GDP.

There are many many bets in the economy like this, made on prevailing assumptions that could get changed. Companies will collapse, jobs will be lost, bankruptcies will happen. That’s why no one talks about the value.
Pure insanity.

"Each gigawatt of capacity is expected to cost roughly $50 billion, meaning the company is laying the groundwork for at least $1 trillion in infrastructure spending."

“I don’t think we’ve figured out yet the final form of what financing for compute looks like,” OpenAI Chief Executive Officer Sam Altman said. “But I assume, like in many other technological revolutions, figuring out the right answer to that will unlock a huge amount of value delivered to society.”

All we ever seem to hear from these egomaniacs is how much money they're spending, how big the datacenter structures will be, and how many "gigawatts" will be generated. Never a detail about the "amount of value delivered to society." Nothing about products, revenues, or earnings. Just build it and they will come?? This may be the equivalent of the one-upmanship skyscraper building leading up to the 1929 Crash (and Great Depression). wsj.com/tech/openai-unveils-…
Peter Wang 🦋 retweeted
I just read a research paper on AI humor that completely blew my mind. 🤯 It’s called "Pun Unintended," and it basically proves we've all been tricked.

Everyone says AI is getting scarily smart. We see models like GPT-4o write poetry, generate code, and hold conversations. The common belief is that they are starting to truly understand language. We think they get it.

But these researchers tried something dead simple. They took a bunch of puns, a basic form of nuanced language. Then, they systematically "broke" them. They'd take a pun like "Long fairy tales have a tendency to dragon (drag on)" and swap the key word, creating nonsense like "Long fairy tales have a tendency to wyvern." Simple, right? The joke is ruined. A human would know instantly.

The results were dramatic. On these "broken" puns, the AI's accuracy dropped by a staggering 50%. State-of-the-art models, including GPT-4o, were consistently fooled. They couldn't tell the difference between a clever pun and a sentence that just looked like one.

But here's where it gets weird. The researchers found that the AIs were often more confident when they were wrong. They'd see a sentence with a familiar pun structure (like "Old bankers never die, they just...") and immediately classify it as a pun, even if the actual wordplay was gone. It was just recognizing a pattern. An echo. Not the meaning.

This made me realize I've been thinking about prompting all wrong. I spend my time trying to give the AI perfect context, assuming it understands the subtleties. But maybe I'm just feeding it statistical patterns it's seen before. I'm not talking to a collaborator; I'm playing a very advanced game of Mad Libs.

This has broader implications that are honestly a little scary. If an AI can't reliably tell when a simple joke is broken, should we be trusting it with high-stakes tasks? Think about legal document analysis, medical symptom checkers, or financial modeling. Makes me wonder what other "illusions of understanding" we're currently falling for.

Here’s the powerful conclusion that's been stuck in my head: We've been measuring the wrong thing. We thought we were testing for AI comprehension, but we've just been testing its ability to imitate. We've been mistaking a flawless imitation for the real thing this whole time.
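The paper's perturbation trick is easy to reproduce in spirit: swap the word carrying the wordplay for a related word that kills the joke, then check whether a model's pun/not-pun answer flips. A rough sketch - classify_pun is a placeholder for whatever LLM judge you use; this is not the authors' code:

    # Rough sketch of the "broken pun" probe described above; not the authors' code.
    # classify_pun() is a placeholder for whatever LLM call serves as the judge.

    PAIRS = [
        # (pun sentence, word carrying the wordplay, replacement that kills the joke)
        ("Long fairy tales have a tendency to dragon.", "dragon", "wyvern"),
    ]

    def break_pun(sentence: str, pun_word: str, replacement: str) -> str:
        """Swap the pun-carrying word while keeping the sentence superficially plausible."""
        return sentence.replace(pun_word, replacement)

    def probe(classify_pun) -> None:
        for sentence, pun_word, replacement in PAIRS:
            broken = break_pun(sentence, pun_word, replacement)
            # A model that gets the joke should answer "pun" then "not a pun";
            # the paper reports accuracy dropping ~50% on the broken versions.
            print(sentence, "->", classify_pun(sentence))
            print(broken, "->", classify_pun(broken))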
Peter Wang 🦋 retweeted
I wrote a piece on why MTG's wrong. Here's a better path... jeffgiesea.substack.com/p/ci…
There is nothing left to talk about with the left. They hate us. They assassinated our nice guy who actually talked to them peacefully, debating ideas. Then millions on the left celebrated and made clear they want all of us dead.

To be honest, I want a peaceful national divorce. Our country is too far gone and too far divided, and it’s no longer safe for any of us.

What will come from Charlie Kirk being martyred is already happening. It is a spiritual revival building the kingdom for Christ. But it will happen on the outside, not within the halls of our government. Democrats are hardened in their beliefs and will flip the switch back as soon as they have power.

And, if you are expecting Republicans to fight against evil, with the power they currently possess, and end this once and for all, you are going to be extremely disappointed. This week Congress will be voting on another CR - Biden’s budget that FUNDS TRANSGENDER POLICIES, NOT our own Trump policy budget that funds what you voted for. We had 9 months to get it done, but for reasons I don’t understand or agree with, it wasn’t the priority.

Government is not the answer, God is. Turn your full faith and trust to our Almighty God and our Savior Jesus. Tighten your circle around your family and protect them at all times. I will pray for the left, but personally I want nothing to do with them.
Peter Wang 🦋 retweeted
nearly everything in AI can be understood through the lens of compression
- the architecture is just schema for when & how to compress
- optimization is a compression *process*, with its own compression level and duration
- (architecture + data + optimization) = model
- in other words, a model is just a compressed form of a dataset (with some extra choices)
- posthoc quantization is a process of compressing a model even further
- generalization is a measurement of compression quality
- scaling laws are measurements of compression ratio and data size
- different datasets have highly variable compression rates (eg text vs images)
- inference can be viewed as a model-conditioned prompt decompression
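A concrete handle on the "model is a compressed form of a dataset" point above: a causal LM's average negative log-likelihood on held-out text is literally a code length, so compression quality can be measured directly as bits per byte. A minimal sketch, assuming the standard HuggingFace transformers API ("gpt2" is just a placeholder checkpoint):

    # Minimal sketch: measure a model's compression of text as bits per byte.
    # Assumes the HuggingFace transformers API; "gpt2" is just a placeholder.
    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def bits_per_byte(text: str, model_name: str = "gpt2") -> float:
        tok = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # loss = mean negative log-likelihood per predicted token, in nats
            loss = model(ids, labels=ids).loss.item()
        n_pred = ids.numel() - 1                  # labels are shifted by one token
        total_bits = loss * n_pred / math.log(2)  # nats -> bits of code length
        return total_bits / len(text.encode("utf-8"))

    # Lower bits/byte on text the model never saw = better compression,
    # which is exactly the "generalization as compression quality" claim.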
Peter Wang 🦋 retweeted
As a fully open LLM, Apertus allows researchers, professionals and experienced enthusiasts to build upon the model and adapt it to their specific needs, as well as to inspect any part of the training process. Watch our video: piped.video/watch?v=q8iEzU7A…
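If you want to poke at a fully open checkpoint like this, the usual transformers loading pattern applies. A sketch - the repo id below is a placeholder, since the exact hub name should be checked on the Apertus release page:

    # Sketch of loading an open checkpoint with transformers; the repo id is a
    # placeholder - check the Apertus release page for the real hub name.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "swiss-ai/Apertus-8B"  # placeholder id, verify before use
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "The key advantage of fully open training data is"
    out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=40)
    print(tok.decode(out[0], skip_special_tokens=True))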