Huge AI news from Google: Google Research just unveiled a bold new ML paradigm that treats a model as a stack of nested optimization problems so it can keep learning new skills without forgetting the old ones, a huge leap toward AI that actually evolves like a brain. 👀 On tests of language modeling, long-context reasoning, and continual learning, Hope (a self-modifying architecture) outperformed standard transformer architectures and older methods. Big progress

Nov 7, 2025 · 7:31 PM UTC
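For intuition only: nested learning frames a model as several optimization problems that update at different frequencies, so slowly-updated levels preserve older knowledge while fast levels adapt to new data. The toy sketch below is an assumption-laden illustration of that general idea, not Google's Hope code; every name in it (NestedLevel, update_every, NestedLinearModel) is invented for the example.

```python
import numpy as np

class NestedLevel:
    """One level of the stack: parameters that update only every `update_every` steps.

    Illustrative assumption, not the actual Hope/Nested Learning implementation.
    """
    def __init__(self, dim, update_every, lr):
        self.w = np.zeros(dim)               # this level's parameters
        self.update_every = update_every     # how often this level is allowed to update
        self.lr = lr
        self.grad_accum = np.zeros(dim)      # gradients accumulated between updates

    def step(self, grad, t):
        self.grad_accum += grad
        if t % self.update_every == 0:       # slow levels skip most steps
            self.w -= self.lr * self.grad_accum / self.update_every
            self.grad_accum[:] = 0.0


class NestedLinearModel:
    """Toy linear model whose effective weights are the sum of the nested levels."""
    def __init__(self, dim, update_periods=(1, 10, 100)):
        self.levels = [NestedLevel(dim, p, lr=0.05) for p in update_periods]

    def weights(self):
        return sum(level.w for level in self.levels)

    def train_step(self, x, y, t):
        pred = self.weights() @ x
        grad = (pred - y) * x                # squared-error gradient w.r.t. the weights
        for level in self.levels:
            level.step(grad, t)
        return (pred - y) ** 2


# Usage: stream two "tasks" in sequence; the slow levels change little during
# task B, which is the intuition for why nesting could reduce forgetting.
rng = np.random.default_rng(0)
model = NestedLinearModel(dim=8)
w_a = rng.normal(size=8)                     # task A target weights
w_b = rng.normal(size=8)                     # task B target weights
for t in range(1, 2001):
    target = w_a if t <= 1000 else w_b
    x = rng.normal(size=8)
    model.train_step(x, target @ x, t)
print("final weights minus task B target:", np.round(model.weights() - w_b, 2))
```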

Replying to @Dr_Singularity
sounds like they're trying to solve the "catastrophic forgetting" issue that plagues ML models
Replying to @Dr_Singularity
Nested learning? The discoveries are not only coming daily, they're now coming near hourly... well, for me at least... Holy crap... mind exploded again... or, as I say, #Smackereled
Replying to @Dr_Singularity
If this scales, we might finally see AI that doesn’t need retraining every time—just keeps growing. Revolutionary.
Replying to @Dr_Singularity
This could be a real shift from static training toward lifelong learning systems.
Replying to @Dr_Singularity
If this scales, it’s the closest we’ve come to giving machines true memory, not storage, but understanding that compounds.
Replying to @Dr_Singularity
That’s a massive conceptual leap. If Hope truly enables continuous, non-destructive learning, it's the beginning of self-evolving AI systems: models that grow their intelligence over time. Transformers changed scale; this could change how AI thinks.
Replying to @Dr_Singularity
Until you turn the power off?
Replying to @Dr_Singularity
Google's Hope sounds like it's gonna replace Chang's little cookie-jar AI that keeps a cookie jar in a cookie jar
Replying to @Dr_Singularity
Just read the article; it seems like an impressive way to deal with the catastrophic forgetting (CF) problems that current transformers carry
Replying to @Dr_Singularity
Huge for digital twins
Replying to @Dr_Singularity
Oh, that's pretty significant, I think.