Breaking down successful strategies for scaling - through the lens of builders and investors | Scaling strategies + Growth frameworks + Tech + Fundraising.

Kleiner Perkins sold their stake back to Sahil Lavingia for $1.

After raising a $7M Series A, Gumroad couldn’t close a Series B. Despite growing steadily (~60–80% annually), VCs didn’t see hypergrowth. Sahil calls this the real VC trap: being “good but not great.”

Instead of selling or shutting down, he downsized, ran lean, and made Gumroad profitable. A few years later, Kleiner reached out: “We’d be interested in selling our stake back to you for one dollar.” That was the entire email. Why would a top-tier VC do that? To a fund that needs billion-dollar outcomes, a small stake in a steadily profitable company simply isn’t worth the overhead of holding it.

Today, Gumroad runs like a mini public company: profitable, lean, distributing dividends, and running share buybacks via open auctions. Early investors who got in at $0.60 have sold some of their shares back at $4+.

There’s real math behind this “third path” between unicorn success and total failure. It demands strong unit economics and ruthless operational discipline, but it works.

Most founders don’t even consider it. We’re taught to think binary: unicorn or bust. But the middle path (profitable, sustainable, customer-first) can often win in the long run.
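For a rough sense of the “real math” here, a minimal sketch of the buyback return implied by the prices in the post. The $0.60 entry and $4 exit come from the post; the ten-year holding period is an assumption for illustration only.

```python
# Back-of-envelope return math for the Gumroad buyback example.
# Entry and exit prices are from the post; the holding period is assumed.

entry_price = 0.60    # price early investors paid per share (from the post)
exit_price = 4.00     # price some shares were bought back at (from the post)
years_held = 10       # assumed holding period, for illustration only

multiple = exit_price / entry_price
annualized = multiple ** (1 / years_held) - 1   # annual return implied by that multiple

print(f"Return multiple: {multiple:.1f}x")              # ~6.7x
print(f"Implied annualized return: {annualized:.1%}")   # ~21% a year over 10 years
```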
Who's shipping this weekend?
Want to establish thought leadership in this evolving space? We help founders develop the visibility and authority that position them ahead of major industry shifts. DM me to discuss building genuine expertise in rapidly changing markets.
14/ Success in AI requires understanding these architectural shifts. The companies that thrive will be led by founders who grasp these transitions early and establish themselves as forward-thinking experts, not just current implementers.
13/ For founders building in AI, this highlights a strategic insight: The most valuable companies won't just optimize today's models. They'll anticipate and position for the next generation of AI architectures. Timing and positioning matter.
12/ LeCun predicts AI capable of genuine scientific discovery within 3-5 years. Not through brute-force scaling alone. Through systems that build internal world models, reason abstractly, and understand physical reality at a deeper level.
11/ This follows a familiar pattern in technology: Incumbents optimize current paradigms extensively. Breakthrough advances often come from architectural shifts that change the fundamental approach entirely. New paradigms eventually win.
10/ "We will not achieve human-level AI by simply scaling up LLMs with more data and compute, a fundamentally new approach is needed." While others bet heavily on scaling existing architectures, Meta is exploring different foundations.
9/ "The pen problem" illustrates the challenge. Drop a vertically balanced pen. Which direction will it fall? Humans predict this intuitively through physics understanding. Current AI systems struggle with such basic physical reasoning.
8/ The learning efficiency gap is striking: A 4-year-old processes roughly 10^14 bytes of visual information, comparable to GPT-4's training data. Yet they develop sophisticated physics understanding that current AI cannot match. LeCun's architecture targets this efficiency.
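As a rough sanity check on that 10^14 figure, here is a back-of-envelope calculation. The hours-awake and optic-nerve-bandwidth numbers are my assumptions for illustration, not figures from the thread.

```python
# Back-of-envelope check on "~10^14 bytes of visual input by age four".
# All inputs are rough assumptions for illustration only.

hours_awake_per_day = 11            # assumed average for a young child
seconds_awake = hours_awake_per_day * 3600 * 4 * 365   # first four years

optic_nerve_bandwidth = 2e6         # assumed ~2 MB/s across both optic nerves

visual_bytes = seconds_awake * optic_nerve_bandwidth
print(f"{visual_bytes:.1e} bytes")  # ~1.2e14, i.e. on the order of 10^14
```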
7/ Early results show promise. Meta's V-JEPA detected physically impossible events in videos without explicit physics training. The system learned fundamental physical principles through observation alone. No programmed rules required.
6/ Joint Embedding Predictive Architectures (JEPA). Instead of predicting every pixel or token, JEPAs learn abstract representations. They focus on relationships between objects rather than exact appearances. It's a different path to machine intelligence.
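To make the idea concrete, here is a heavily simplified JEPA-style training objective in PyTorch: predict the embedding of a target view from the embedding of a context view, and compute the loss in representation space rather than pixel space. The MLP encoders, dimensions, and the stop-gradient target are illustrative assumptions, not Meta's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of a JEPA-style objective: predict the *representation* of a
# target view from a context view, instead of reconstructing raw pixels or tokens.

dim = 128
context_encoder = nn.Sequential(nn.Linear(784, dim), nn.ReLU(), nn.Linear(dim, dim))
target_encoder = nn.Sequential(nn.Linear(784, dim), nn.ReLU(), nn.Linear(dim, dim))
predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

def jepa_loss(context_view, target_view):
    z_context = context_encoder(context_view)
    with torch.no_grad():                 # target branch gets no gradients (simplification)
        z_target = target_encoder(target_view)
    z_pred = predictor(z_context)         # predict the target's abstract representation
    return nn.functional.mse_loss(z_pred, z_target)   # loss in embedding space, not pixel space

# Toy usage: random vectors standing in for a visible region and a masked region
loss = jepa_loss(torch.randn(32, 784), torch.randn(32, 784))
loss.backward()
print(loss.item())
```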
5/ Four critical capabilities missing from current AI:
- Understanding the physical world
- Persistent memory systems
- Reasoning capabilities
- Advanced planning
LeCun identified these gaps early. Meta's research addresses each systematically.
4/ "LLMs are designed to regurgitate information based on text statistics, not create new connections or discoveries." This isn't just academic criticism. At Meta, LeCun is building the alternative approach.
3/ His core insight challenges the industry consensus: Humans don't think primarily in language. We build mental models of reality that we manipulate internally. LLMs predict text tokens based on statistical patterns. They lack genuine understanding of physical reality.
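A tiny illustration of what "predicting tokens from statistical patterns" means in practice: a bigram model that only knows which word tends to follow which, with no model of the world behind the words. The toy corpus is my own example.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a corpus,
# then "predict" the most frequent continuation. Pure text statistics.
corpus = "the pen falls down . the pen drops down . the ball falls down".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# The model knows "pen" is often followed by "falls" or "drops",
# but has no internal model of what a pen is or why it falls.
print(following["the"].most_common(1))   # [('pen', 2)]
print(dict(following["pen"]))            # {'falls': 1, 'drops': 1}
```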
2/ LeCun is direct about current LLM limitations. "They're approaching fundamental constraints." These models have nearly exhausted available text data (10^13-10^14 tokens). Each improvement now requires exponentially more resources. The scaling curve is flattening.
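A rough way to see the resource pressure is the common rule of thumb that transformer training cost is about 6 × parameters × tokens in FLOPs. The model sizes and token counts below are illustrative assumptions, not figures from LeCun or Meta.

```python
# Rule of thumb: training FLOPs ~= 6 * parameters * tokens.
# Model sizes and token counts below are illustrative assumptions.

def train_flops(params, tokens):
    return 6 * params * tokens

configs = [
    ("~10B params,  1e13 tokens", 1e10, 1e13),
    ("~100B params, 3e13 tokens", 1e11, 3e13),
    ("~1T params,   1e14 tokens", 1e12, 1e14),  # already at the upper end of available text
]

for name, params, tokens in configs:
    print(f"{name}: {train_flops(params, tokens):.1e} FLOPs")

# Each step up costs roughly 10-30x more compute, while the pool of usable
# text tops out around 1e13-1e14 tokens, which is the squeeze the thread describes.
```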
1/ Meta has invested $35B+ in AI research since 2013. While competitors focus on scaling language models, LeCun (Turing Award winner, 2018) has been developing a different architecture entirely. The approaches are diverging significantly.
Yann LeCun is one of the godfathers of AI. Yet not many people know about him. He is Meta's Chief AI Scientist. And he's building something that challenges the current approach taken by OpenAI, Anthropic, and Google. His roadmap reveals why Meta's long-term AI strategy might create significant advantages. Here's what he's working on 🧵
Now is the best time to build. Either for yourself or as part of a founding team at a startup.