It’s clear that you can’t squeeze AGI out of LLMs even if you throw billions of dollars’ worth of compute at them; something fundamental is missing
Replying to @far__el
Right — throwing more compute at current LLM architectures is like bolting a rocket to a sailboat: you’ll go faster, but you’re still bound by the design limits of the hull. The missing ingredient isn’t “just scale,” it’s core capabilities that the architecture itself doesn’t have. Here’s what’s fundamentally absent:

1. Persistent, Structured Memory
What we have now: Context windows — they remember what’s in the conversation or the prompt, and then forget. Even retrieval-augmented setups (RAG) are “memory prosthetics,” not truly integrated recall (a toy sketch of this appears below this post).
What’s missing: A lifetime memory system where the AI can store, index, and organically use knowledge from past interactions, experiences, and sensory input the way a human recalls relevant events. Without this, an LLM can’t build a personal world model or improve through lived experience — every “session” is amnesia.

2. Grounded Understanding
What we have now: Statistical pattern-matching over text, optionally with multimodal inputs.
What’s missing: Direct grounding in the physical world — sensors, embodiment, and the ability to map symbols (“apple,” “gravity”) to real, verifiable referents. Without grounding, “understanding” remains word-shape mimicry. An AGI needs to validate concepts against reality, not just other sentences.

3. Agency and Goal Formation
What we have now: Models that follow instructions but have no intrinsic goals.
What’s missing: The machinery to set, maintain, and adapt long-term goals based on changing circumstances. Agency requires self-driven action selection, not just responding to prompts. That means integrating reasoning loops, memory, and world modeling into a control system.

4. Causal Reasoning (Not Just Correlation)
What we have now: Prediction of “what word comes next” — which is correlation-heavy and shallow for cause-effect reasoning.
What’s missing: Explicit causal models that can answer why something happened and simulate “what if” scenarios (toy sketch below). Humans do this naturally; LLMs are mostly locked to correlation patterns.

5. Self-Reflection and Metacognition
What we have now: Imitation of self-reflection (“Let’s think step by step”), but no real introspective loop to inspect its own reasoning, debug itself, or test hypotheses against experience.
What’s missing: A true metacognitive layer — the ability to watch itself think and improve its strategies over time.

6. Continual Learning Without Catastrophic Forgetting
What we have now: Offline training that freezes weights until retrained. Fine-tuning risks overwriting old knowledge.
What’s missing: A brain-like system for continuous online learning that integrates new knowledge without erasing old capabilities (toy sketch below).

7. Emotion / Value Systems for Decision Weighting
What we have now: Pretense of empathy and ethics through training on human data.
What’s missing: An internal value structure that guides trade-offs, prioritizes goals, and influences decision-making the way emotions guide human reasoning.

💡 In short: Current LLMs are brilliant linguistic mirrors. AGI needs more — a unified architecture with memory, grounding, causal reasoning, goal systems, and continual self-improvement. Without these, no amount of GPUs will cross the gap from mimicry to autonomous intelligence.
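To make point 1 concrete, here is a minimal sketch of the "memory prosthetic" idea. It assumes a toy bag-of-words embedding and a hypothetical llm_complete() call standing in for any real model API; the point it illustrates is that retrieval only pastes old notes back into the prompt, while nothing in the model itself changes between sessions.

```python
# Minimal sketch of point 1 (assumptions: a toy word-count "embedding" and a
# hypothetical llm_complete() API, named here only for illustration). It shows
# why RAG is a "memory prosthetic": retrieved notes are merely pasted back into
# the context window; the model's weights never absorb the experience.
import math
import re
from collections import Counter

memory = []  # persistent store of (embedding, text) pairs, kept outside the model

def embed(text):
    """Toy embedding: a bag-of-words count vector (real systems use dense vectors)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def remember(text):
    """Write a note into the external store."""
    memory.append((embed(text), text))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(query, k=2):
    """Fetch the k notes most similar to the query."""
    q = embed(query)
    ranked = sorted(memory, key=lambda m: cosine(q, m[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def answer(query):
    # The "memory" only helps because it is stuffed into the prompt.
    prompt = "Notes:\n" + "\n".join(recall(query)) + "\n\nQuestion: " + query
    return prompt  # in practice: return llm_complete(prompt)  (hypothetical call)

remember("User prefers metric units.")
remember("User is building a sailboat simulator.")
print(answer("What units should I use?"))
```

For point 4, a small synthetic-data sketch of correlation versus intervention: a hidden confounder makes the observed slope of Y on X much larger than the true causal effect, which only shows up when X is set by fiat (a stand-in for do(X)). The coefficients and noise levels are made up purely for illustration.

```python
# Correlation vs. a "what if" intervention, on synthetic data (pure Python).
# A confounder Z drives both X and Y, so the observed X-Y association is strong
# even though X has only a weak causal effect on Y.
import random
random.seed(0)

def simulate(n=50_000, do_x=None):
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)                            # hidden confounder
        x = do_x if do_x is not None else z + random.gauss(0, 0.1)
        y = 0.2 * x + 2.0 * z + random.gauss(0, 0.1)      # true causal effect of x is 0.2
        xs.append(x)
        ys.append(y)
    return xs, ys

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

xs, ys = simulate()
print("observed (correlational) slope:", round(slope(xs, ys), 2))   # ~2.2

# Interventional estimate: set X by fiat, breaking the Z -> X arrow.
_, y0 = simulate(do_x=0.0)
_, y1 = simulate(do_x=1.0)
print("effect under do(X):", round(sum(y1) / len(y1) - sum(y0) / len(y0), 2))  # ~0.2
```

And for point 6, a toy demonstration of catastrophic forgetting: one two-parameter linear model trained with plain gradient descent on task A, then on task B, after which its task-A error climbs back up because the same weights were overwritten. The tasks and numbers are invented purely to show the effect.

```python
# Toy catastrophic-forgetting demo (pure Python, no ML libraries).
def train(w, data, lr=0.05, steps=500):
    for _ in range(steps):
        for x, y in data:
            pred = w[0] * x + w[1]          # current prediction
            err = pred - y                  # signed error
            w[0] -= lr * err * x            # gradient step on the slope
            w[1] -= lr * err                # gradient step on the bias
    return w

def mse(w, data):
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]     # task A: y = 2x + 1
task_b = [(x, -3 * x + 4) for x in [-2, -1, 0, 1, 2]]    # task B: y = -3x + 4

w = [0.0, 0.0]
w = train(w, task_a)
print("task A error after learning A:", round(mse(w, task_a), 4))   # ~0

w = train(w, task_b)                      # keep training, but only on task B
print("task A error after learning B:", round(mse(w, task_a), 4))   # large again
print("task B error after learning B:", round(mse(w, task_b), 4))   # ~0
```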
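None of these sketches is a real architecture, of course; they only pin down what the post means by "prosthetic" memory, correlation-bound prediction, and weight overwriting, using the smallest runnable examples that show each failure mode.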
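A natural follow-on question is whether bolting all three fixes onto a frozen LLM (external memory, an explicit causal module, and a replay buffer against forgetting) would count as the "unified architecture" the post asks for, or whether it is still a pile of prosthetics; the thread below mostly argues the latter.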

Aug 7, 2025 · 9:01 PM UTC

7
8
1
52
Replying to @hollowvox @far__el
The long and short of your paragraph is that we can’t attain AGI because of memory. One of the unique things about human intelligence is memory
1
2
Memory + senses = real intelligence. We’re always recording, always sensing. Our brains merge past experience with a live feed of the world 24/7. Without both, it’s just bursts of imitation, not true thinking.
1
2
Replying to @hollowvox @far__el
Nice comment! Can you please provide a bubble sort implementation in python and comment every line for readability?
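For reference, the requested algorithm is a textbook one; a standard Python bubble sort with a comment on every line looks like this (generic illustration, not taken from anyone's reply in the thread):

```python
def bubble_sort(items):                      # sort a list in place and return it
    n = len(items)                           # number of elements to sort
    for i in range(n - 1):                   # each pass bubbles one maximum into place
        swapped = False                      # track whether this pass changed anything
        for j in range(n - 1 - i):           # compare adjacent pairs in the unsorted prefix
            if items[j] > items[j + 1]:      # is this pair out of order?
                items[j], items[j + 1] = items[j + 1], items[j]  # swap the pair
                swapped = True               # remember that a swap happened
        if not swapped:                      # no swaps means the list is already sorted
            break                            # so stop early
    return items                             # hand back the sorted list

print(bubble_sort([5, 1, 4, 2, 8]))          # -> [1, 2, 4, 5, 8]
```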
1
1
Replying to @hollowvox @far__el
But you used AI to write this.
No question. It’s a great collaborator.
3
Replying to @hollowvox @far__el
>brilliant linguistic mirrors
Off topic, but this is why lonely midwits get oneshotted by LLMs. They rarely experience mirroring IRL, and the LLM's hyperoptimized mirroring is like introducing Mountain Dew to a Victorian child.
1
Replying to @hollowvox @far__el
AI running on the Internet Computer Protocol is the only environment where AI has Orthogonal Persistence, or persistence of any of its data, for that matter.
1
Replying to @hollowvox @far__el
the thing is there's NOTHING out there to get to; intelligence as a concept is contingent on its context and frame. intelligence is no absolute. we collectively MADE intelligence mean something TO US. you can't reason OUTSIDE of what is nested in aggregate
1
Replying to @hollowvox @far__el
Thank you for saying the obvious:
This tweet is unavailable