Huge AI news from Google
Google Research just unveiled a new ML paradigm that treats a model as a stack of nested optimization problems, so it can keep learning new skills without forgetting old ones. A big step toward AI that actually evolves like a brain. 👀
On tests of language modeling, long-context reasoning, and continual learning, Hope (a self-modifying architecture) outperformed standard transformer architectures and earlier methods.
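The "stack of nested problems" idea above can be sketched as parameters that update at different timescales: a fast level adapts every step, while a slower level consolidates only occasionally, helping it retain older knowledge. This is a toy illustration of the general mechanism as I understand it, not Google's actual algorithm; all names and numbers here are my own assumptions.

```python
# Toy sketch (assumed mechanism, not Google's code): two nested update
# levels running at different frequencies. The fast level chases the
# current task; the slow level consolidates toward it every few steps.

def nested_step(grads, fast_w, slow_w, step, slow_every=4,
                fast_lr=0.5, slow_lr=0.05):
    """One update of a two-timescale (fast/slow) parameter stack."""
    # Fast level: ordinary gradient step every iteration.
    fast_w = [w - fast_lr * g for w, g in zip(fast_w, grads)]
    if step % slow_every == 0:
        # Slow level: move a small fraction toward the fast weights,
        # so rapid task-specific changes are absorbed only gradually.
        slow_w = [s + slow_lr * (f - s) for s, f in zip(slow_w, fast_w)]
    return fast_w, slow_w

# Demo: fit a simple target with the fast level while the slow level
# consolidates behind it.
target = [1.0, -2.0]
fast, slow = [0.0, 0.0], [0.0, 0.0]
for step in range(1, 41):
    # Gradient of the squared error 0.5 * (w - target)^2.
    grads = [f - t for f, t in zip(fast, target)]
    fast, slow = nested_step(grads, fast, slow, step)
```

After the loop, the fast weights have essentially reached the target, while the slow weights have only drifted partway toward it, which is the point: the slow level changes cautiously, so earlier knowledge is overwritten more slowly.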
Big progress
That’s a massive conceptual leap.
If Hope truly enables continuous, non-destructive learning, it marks the beginning of self-evolving AI systems: models that grow their intelligence over time.
Transformers changed scale; this could change how AI thinks.
Nov 8, 2025 · 12:04 AM UTC