"We are a standard breakthrough and business-as-usual engineering away from AGI." A detailed breakdown of all the bottlenecks to AGI.

Oct 22, 2025 · 5:53 PM UTC

Replying to @DanHendrycks
This is nonsense. It's like saying my Tesla is 75% airplane: it's fast, carries passengers, and has autopilot -- it just doesn't fly (yet). If a system can't learn incrementally in real time (and LLMs simply cannot), it won't get to AGI. petervoss.substack.com/p/cog…
Replying to @DanHendrycks
People should ask AI when AI thinks AGI will be available.
Replying to @DanHendrycks
You didn't mention epistemology, which is the most crucial bottleneck.
Replying to @timo_kaleva
The Sampo AGI Framework: The Only Viable Path to Ethical Artificial General Intelligence #SampoAGI #ethicalAI #ASI #AIgovernance

Timo Kaleva, Finland, 15.5.2025

Abstract: Artificial General Intelligence (AGI) is often presented as the next leap in technological evolution, but without a shared epistemic foundation it poses more existential risk than promise. This paper introduces the Sampo AGI Framework, a mathematically grounded and philosophically coherent approach to AGI rooted in my theory of truth as the integral of belief over time. I argue that no AGI system can be secure, ethical, or truly intelligent without integrating human belief structures through a cognitively secure, decentralised mechanism. Sampo is presented as the only practical implementation of such a system: a trust-centric, blockchain-secured, community-governed belief architecture capable of preserving the diversity of perspectives while guiding consensus on essential matters. I demonstrate that Sampo is the only viable AGI framework in the face of global epistemic fragmentation and accelerating complexity. Other AGI paradigms lack the capacity to define or verify truth, making them ethically non-viable. Only through a system like Sampo, which is anchored in temporal, participatory truth, can humanity ensure the rise of AGI aligned with freedom, transparency, and the flourishing of collective consciousness.
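As a reading aid, here is one way the "truth as the integral of belief over time" phrase could be formalized; the symbols $b(x,t)$, $T(x)$, and the time window are my assumptions, not definitions given in the post:

$$T(x) = \frac{1}{t_1 - t_0} \int_{t_0}^{t_1} b(x, t)\, dt$$

where $b(x,t) \in [0,1]$ would be the aggregate community belief in proposition $x$ at time $t$, so $T(x)$ is simply the time-averaged belief over the window $[t_0, t_1]$.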
Replying to @DanHendrycks
Largely agreed. Not sure if this fits under On-the-Spot Reasoning (R) or Working Memory (WM) or elsewhere, but today's AI is not only amnesiac every session, it is amnesiac even across multiple inferences (2/2+). You're right to pick on the severity of the amnesia in current systems: they do not retain the latent plans or thoughts of prior inferences; all they ever have is the chat history in view. The result is semantic drift, a telephone-game effect.

I think it's possible to have a kind of latent persistence to work around the amnesia, even with a largely frozen model. Whether it's a world-model-style (single) sidecar or a full latent ladder: if the model had a way to carry latents across different timescales, that kind of rolling latent context, a variant of working memory, would help. I.e., stacked horizon embeddings of horizon-subscribed context, akin to a temporal decomposition, each acting like an orthogonal representation of the latent policy at a given horizon and influencing the next inference (rough sketch below).

Relying on RL over chat history to plan better would likely help somewhat, but it doesn't solve the amnesia issue. A clever way to hold (and forget) context would be more useful than today's zero latent carry, and no amount of continual learning would patch that; sparse weight updates are more akin to knowledge updates, imo. It feels like a pretty severe gap in current AI systems, and a bit yikes from an information-theoretic perspective.

RNN-style traditional recurrence -> constantly updates a single latent state, but evolves it whole, monolithically, and frequently; information generally doesn't last many hops. Transformers -> a full latent break at the end of every inference: nothing survives except the symbolic text (the internal gist and internal goals that produced that text are lost), so internal context must be repeatedly re-inferred from whatever made it into the window, and again drifts over long durations.
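A minimal sketch of the "stacked horizon embeddings" idea in plain NumPy; the class name, the one-EMA-per-horizon mechanism, and the decay values are my assumptions for illustration, not something the reply specifies:

```python
import numpy as np

class MultiHorizonLatentCarry:
    """Rolling latent context across timescales: one EMA summary per horizon.

    Each horizon keeps an exponential moving average of the model's final
    hidden-state summary with a different decay rate, so fast horizons track
    the recent gist and slow horizons track longer-lived goals/plans. The
    stacked summaries are returned as extra "memory" vectors that could
    condition the next inference (e.g. as prefix embeddings), instead of
    carrying zero latent state between calls.
    """

    def __init__(self, d_model: int, decays=(0.5, 0.9, 0.99)):
        self.decays = decays                            # one decay per horizon
        self.state = np.zeros((len(decays), d_model))   # stacked horizon latents

    def update(self, hidden_summary: np.ndarray) -> np.ndarray:
        """Fold the latest inference's hidden summary into every horizon."""
        for i, decay in enumerate(self.decays):
            self.state[i] = decay * self.state[i] + (1.0 - decay) * hidden_summary
        return self.state                               # shape: (num_horizons, d_model)


if __name__ == "__main__":
    d_model = 8
    carry = MultiHorizonLatentCarry(d_model)
    rng = np.random.default_rng(0)

    for step in range(5):
        # Stand-in for "the final hidden state of this inference".
        hidden_summary = rng.normal(size=d_model)
        memory = carry.update(hidden_summary)
        # `memory` would be projected and prepended to the next forward pass,
        # so some latent gist survives beyond the raw chat-history text.
        print(f"step {step}: per-horizon norms =",
              np.round(np.linalg.norm(memory, axis=1), 3))
```

The fast/slow split here is only the simplest possible temporal decomposition; the "latent ladder" in the reply presumably implies learned, roughly orthogonal per-horizon representations rather than plain moving averages.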