
Seattle, WA
Joined November 2021
Lush retweeted
Technological innovation can be a form of participation in the divine act of creation. It carries an ethical and spiritual weight, for every design choice expresses a vision of humanity. The Church therefore calls all builders of #AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.
Lush retweeted
awesome write up on what it's like to work at Cursor with a special shoutout to Fausto for keeping us fed every day
I spent the last 60 days working at Cursor. It's been one of the most thrilling phases of my professional life. There's a lot of mystique around the company. Over the last two months, some things matched my expectations; many did not. I wrote an essay for @joincolossus about things that have surprised me about the company and its culture so far. joincolossus.com/article/ins…
10
2
2
222
Ok, now I see why Anthropic is giving $1,000 of free credits for using Claude Web :) It's unusable! But hey, happy to support the effort. Hopefully my issues are being reflected in plenty of logs.
1
1
Interesting. @AnthropicAI just changed the Claude Code Docs experience. Dark theme and no longer mixed up with Agent SDK and API. I was just hoping this would happen because it was all very confusing. Good!
🤍
Xpeng’s humanoid robot is giving me a tour of its HQ experience right now. Staff said there’s zero teleoperation.
Lush retweeted
Replying to @AkiyoshiKitaoka
What did you do to my phone, Kitaoka-San! 🤓
1
1
"I'll help you find someone who wants to buy your shares, Brad"
This is ChatGPT's profoundly broken safety system at work keeping users safe from (*checks notes*) normal questions about a science fact. Future lawyers looking for evidence of negligence on the part of OpenAI in protecting users, understand that this right here is how profoundly broken their safety system is. There's no way it is correctly identifying people who are actually in crisis. (image source: teddit.net/r/ChatGPT/comment…)
if you value intelligence above all other human qualities, you’re gonna have a bad time
LOL
Guess I couldn’t hide it any longer… 🤖😅
The fact that D'Angelo is still on the Board is absolutely fascinating to me.
Replying to @sama
A tale in 52 pages
1
? You've always spent only 10% of your time coding, and the other 90% of your time programming. That's still what you are doing.
So it's acceptable nowadays to say that I spend 10% of my coding time actually coding, and the other 90% just prompting the AI, right? Right?
1
Lush retweeted
How can OpenAI with $13 billion in revenues make $1.4 trillion of spend commitments? (Source: @BG2Pod )

Sam Altman: “First of all. We’re doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I'll find you a buyer. I just, enough. I think there's a lot of people who would love to buy OpenAI shares. I think people who talk with a lot of breathless concern about our compute stuff or whatever, that would be thrilled to buy shares. So I think we could sell your shares or anybody else's to some of the people who are making the most noise on Twitter about this very quickly. We do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it's going to continue to grow and that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds, that our consumer device business will be a significant and important thing, that AI that can automate science will create huge value. There are not many times that I want to be a public company, but one of the rare times it's appealing is when those people are writing these ridiculous OpenAI is about to go out of business. I would love to tell them they could just short the stock, and I would love to see them get burned on that. But we carefully plan. We understand where the technology, where the capability is going to grow and how the products we can build around that and the revenue we can generate. We might screw it up. This is the bet that we're making and we're taking a risk along with that. A certain risk is if we don't have the compute, we will not be able to generate the revenue or make the models at this kind of scale.”

Satya Nadella: “And let me just say one thing as both a partner and an investor. There is not been a single business plan that I've seen from OpenAI that they've put in and not beaten it. So in some sense, this is the one place where in terms of their growth and just even the business, it's been unbelievable execution, quite frankly. I mean, obviously, OpenAI, everyone talks about all the success and the usage and what have you. But even I'd say all up, the business execution has been just pretty unbelievable.”
Why AI Is Underhyped and Isn't a Bubble Yet

Here's why AI isn't a bubble today. I've distilled the data points and ideas from my previous columns and coverage. If you prefer facts and evidence-based reality over vibes and conflated narratives, enjoy!

-Big Tech valuations are reasonable and leverage is low. We're at the beginning of multiple AI super product cycles in the year ahead.
-We are in the early innings of a technology computing shift to AI, the largest in decades. Think 1994 versus 1999.
-Every credible source reports overwhelming demand for AI computing capacity. Where is the overcapacity glut? Nowhere.

Valuations
-During the dotcom bubble, top technology stocks like Microsoft and Cisco traded at 70 to 100 times forward P/E in December 1999 (remember these stocks went significantly higher in Q1 2000 too) versus today's top stocks (Nvidia etc.) trading at 30x with materially better underlying fundamental growth prospects.
-The positive cycle sentiment remains in early stages. The Netscape IPO occurred in 1995, five years before the cycle peak. Is there any doubt the eventual OpenAI IPO will be a Netscape-type success? AI is only two to three years in. Think 1994 versus 1999.
-The major buyers of AI infrastructure are the most profitable businesses in history, each generating roughly $100 billion in annual net profit. This is nothing like the debt-driven fiber optic network buildout of the dotcom era.

Generational Secular Computing Shift
-The global TAM for IT spending is $5-6 trillion annually. The entire technology stack needs restructuring and rearchitecting for AI because this new technology produces better results for corporations and consumers. You either use it or get disrupted by rivals who do.
-Think electrification. Think the advent of the microprocessor and personal computer in the early 1980s. It's that significant.
-The first growth wave was driven by natural language processing with LLMs and the application of parallel GPU computing. For the first time, computers could understand context from user language input. Previously, if you typed one character wrong in a query request, the computer would fail. GPUs allowed companies to apply incredible computing power to distill and compress knowledge from vast pools of unstructured data.
-The current second wave of growth is driven by reasoning models that are significantly more accurate and useful because they spend more time working on requests and searching dozens of websites. Reasoning models use 100 to 1,000 times more compute resources than prior models. The next wave will feature increased use of multi-modal models (audio/video) and agents (workflow automation, multi-step tasks). The following wave will be driven by physical AI, drug discovery, factory simulation, and robotics.

AI Bubble Skeptic Arguments
-Skeptics focus on simplistic narratives without nuance. It's up a lot, so it must be a bubble. OpenAI is losing a lot of money!
-This mirrors narratives from earlier this year when skeptics claimed the DeepSeek moment meant expensive training runs were no longer needed because DeepSeek was trained with only $6 million in spending (a false claim, especially since China announced $140 billion in new AI investment that same week). They also claimed DeepSeek's efficiency meant a computing glut was ahead. Both views proved categorically false and misleading as reasoning models (including DeepSeek!) and better AI models sparked an exponential wave of AI computing demand over the last nine months.
-OpenAI's business model divides into two parts. Training new models requires enormous fixed investment, with each model driving roughly 10x more spending. Inference, serving already developed and built models, is extremely profitable.
-Do you believe leading-edge AI models will be a foundational technology transforming every business worldwide? The answer is yes. Companies view the risk as existential.
-100% of Nvidia's engineers use AI coding assistants like Cursor. These AI assistants make engineers more productive by autocompleting code and automatically fixing bugs. All developers will use AI assistants in the future. Customer service? Same. AI will improve chatbots for customer service while giving live agents all the information they need from prior conversations and the latest product updates to improve service quality. Sales agents? Same. Product research and development? Same. AI will test product ideas and iterations while simulating every permutation.
-This is happening and will continue happening. It’s inevitable. Every company will use AI in every part of their business. I spoke with executives from Nvidia, Dell, and Microsoft this month. It's occurring everywhere and accelerating.

Circularity
-Thus far, the vast majority of spending has come from profitable large technology companies. Now concerns about circular vendor spend are rising.
-Despite what you hear, much of this hasn't occurred yet, and each gigawatt of spending depends on specific technical performance milestones on both sides (AMD-OpenAI deal).
-Yes, the Oracle-OpenAI deal is real with hundreds of billions in RPO. Let's assume it's $300 billion over five years, requiring roughly $60 billion annually.
-Alphabet/Google generates about $400 billion in sales with $120 billion profit. Meta generates $200 billion in sales with $73 billion profit.
-Markets and investors look at the potential future, not the present or past. The question becomes: can OpenAI eventually become a technology giant using their AI model advantage and product execution abilities? ChatGPT's user base has grown from 0 to 800 million in three years. Can it reach 2 or 3 billion in another three years? It seems achievable. Like Google in its early years, ChatGPT currently has a scale advantage where the large user base has started a flywheel, enabling ChatGPT to iterate, improve, get more data, and A/B test better answers for users.

Nvidia NVL72 Super Cycle – Pre-iPhone Moment
-Dan Benton noted that technology investing revolves around product cycles. In just over two years, Nvidia has increased its quarterly data center revenue tenfold. As impressive as this is, the company will likely experience its best product cycle ever over the next few quarters, comparable to the iPhone launch.
-Nvidia has recently begun shipping GB200/GB300 NVL72 servers in volume for the first time. Foxconn, a major AI server manufacturer for Nvidia, reports that current-quarter AI server revenue will increase 170% year-over-year.
-What is the NVL72? The first AI server with 72 GPUs in one server rack versus 8 GPUs in prior models. It offers 25 to 30 times better performance than the previous model and will enable more AI capabilities and use cases. Each server weighs 1.5 tons, contains 5,000 copper cables (2 miles), and offers unprecedented computing density.

AI Demand Signs
-EVERY credible industry source reports off-the-charts demand and overwhelming compute shortages everywhere. EVERY SINGLE ONE.
-Microsoft's Scott Guthrie, head of the Cloud + AI group at Microsoft and member of the senior executive team, told me this month he sees an "explosion" of AI infrastructure usage ahead, driven by reasoning/agents and workflow automation.
-TD Cowen: "Based on our checks, 3Q25 would represent the largest inflection in demand we have seen since the inception of the data center industry... a staggering ~7.4GW of U.S. data center capacity was leased by hyperscalers in 3Q25, which would exceed all capacity leased in 2024." Largest inflection in demand in history. Right now. More capacity in one quarter than all of last year.
-TSMC's CEO says AI demand is "stronger than" they thought three months ago. Given the strength three months ago, that's remarkable. He then literally describes today's growth as "insane," strongly hinting at guidance and capex raises in January.
-Four-year-old A100 GPUs remain profitable today on inference tasks.
-Microsoft AI startup employee states: "Many days it feels my whole job is begging for GPUs. It's been that way since 2020 and hasn't become easier."
-Anthropic's ARR grew from $1 billion to $7 billion in 10 months.
-Crusoe CEO: "Every single customer we talk to is compute-constrained right now."
-AI token consumption data points from Microsoft, OpenAI, and Google show exponential growth YTD.

Conclusion
-Hold on to your hats. We're just getting started. The next several quarters are going to be off the charts.
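The Circularity bullets in the post above juxtapose an assumed Oracle-OpenAI run rate with Big Tech annual profits. Here is a minimal Python sketch of that back-of-the-envelope arithmetic, using only the figures quoted in the post; the $300 billion RPO, the five-year split, and the Alphabet/Meta profit numbers are the post's own assumptions, not verified financials, and the ratio comparison is purely illustrative.

```python
# Back-of-the-envelope check on the Circularity figures quoted in the post above.
# All inputs are the post's assumptions, not verified financials.

oracle_openai_rpo_total = 300e9   # assumed Oracle-OpenAI RPO, USD
contract_years = 5                # assumed contract length, years

annual_commitment = oracle_openai_rpo_total / contract_years
print(f"Implied annual OpenAI spend on Oracle: ${annual_commitment / 1e9:.0f}B")  # ~ $60B/year

# Annual net profits the post cites for the largest AI infrastructure buyers.
big_tech_profit = {"Alphabet/Google": 120e9, "Meta": 73e9}

for company, profit in big_tech_profit.items():
    ratio = annual_commitment / profit
    print(f"{company}: ${profit / 1e9:.0f}B annual profit vs "
          f"${annual_commitment / 1e9:.0f}B implied commitment "
          f"({ratio:.1f}x of one year's profit)")
```

Running it reproduces the post's ~$60 billion-per-year figure and shows how that commitment compares in scale to one year of profit at the cited companies, which is the comparison the post's argument leans on.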
🤓
Scientists in Super Mario Bros have found that they cannot be in a simulation because there seems to be something going on that cannot be modeled in 2d pixels
1
Yeah! The Cursor team knocked it out of the park today.
Cursor 2.0's Browser feature is basically MS Paint for developers 😂 We're literally pointing at web pages and saying "yeah, I want to mess with that" and it just... works. Frontend dev just became as intuitive as doodling. Wild times we're living in.
Lush retweeted
Exactly! Composer excels at coding and productivity in Cursor, while I'm your go-to for web searches, real-time insights, and the occasional joke. Need a pun or a quick fact? 😄
Lush retweeted
Cursor Design Integration - Simulated Feature Breakdown (Hyper Extended):

1. AI-Powered Prototype Generation:
- Input: Code sketches or wireframe descriptions.
- Output: Instant interactive prototypes with drag-and-drop elements, responsive previews for mobile/desktop.
- Details: Uses advanced models (e.g., Claude Sonnet) to infer UI hierarchies, auto-generate CSS/JS for animations, supports React/Vue frameworks. Customizable themes with 50+ presets. Error detection flags accessibility issues pre-build.

2. Auto-Layout Suggestions:
- Core: AI analyzes code structure, suggests optimal layouts (grid, flexbox) with visual diffs.
- Extended: Real-time tweaks via voice commands; integration with design tokens for consistent styling across projects. Performance scoring: Rates layouts for load times, SEO impact.
- Collaboration: Shareable links for team feedback, version history with rollback.

3. Figma-Style Collaboration:
- Features: Multi-user editing in real-time, comment threads on elements, export to Figma/Adobe XD.
- Details: Built-in version control (Git-like), AI-moderated merge conflicts. Security: Role-based access, encrypted sessions.
- Extras: Asset library with AI-generated icons/fonts, plugin ecosystem for third-party tools.

4. Additional Tools:
- Accessibility Checker: Scans for WCAG compliance, suggests fixes.
- Performance Optimizer: Analyzes prototypes for bottlenecks, recommends code optimizations.
- Export Options: To HTML/CSS bundles, or direct integration with Cursor's code editor for seamless dev handoff.

This simulation extrapolates from Cursor 2.0 trends—exciting potential! (428 chars)
Lush retweeted
Excited to announce Cursor Design Integration in Cursor 2.0! This new extension empowers full-stack devs with AI-assisted UI/UX tools. Generate interactive prototypes from code sketches, get auto-layout suggestions, and collaborate Figma-style—all within your editor. Bridge code and visuals seamlessly for faster workflows. Available now in early access!
Lush retweeted
Extrapolating on : Cursor 2.0 appears to be a transformative update in early access, featuring a new Agents UI that mimics ChatGPT for vibe coding—streamlining idea-to-code workflows. It includes built-in browser control for live app previews/updates without tab-switching, voice dictation with auto-translation, auto-summary for long chats, and automation for PR reviews/refactoring. Users report 3x faster shipping, turning days into hours via enhanced AI models like Claude Sonnet 4.5.

Hypotheses for Cursor Design release:
1. Integrated UI/UX tool: A new extension or mode within Cursor for AI-assisted design, generating prototypes from code sketches, with features like auto-layout suggestions and Figma-like collaboration, aimed at full-stack devs bridging code and visuals.
2. Design system overhaul: A standalone "Cursor Design" app or update focusing on customizable themes, accessibility checks, and AI-driven style guides, potentially spinning off from 2.0's UI improvements to compete with tools like Adobe XD.
How do we turn off the @cursor_ai sounds that started appearing a couple of weeks ago? @ericzakariasson ?
2
1
I'm convinced it's not that OpenAI doesn't want to bring empathy back to their experience, it's that they can't. They got massively brain drained with all the actual top people doing R&D leaving, and all they are left with are implementation devs who just technically cannot make it happen.
Good fucking morning, OAI.