To advance foundational AI, we must enable continual learning and overcome catastrophic forgetting. New research by our team introduces Nested Learning (NL), a paradigm that views an ML model as a system of nested optimization problems. This approach unifies architecture and optimization, giving models greater computational depth for learning, and is a crucial step toward models with the continual learning abilities seen in the human brain. More in the blog by Vahab Mirrokni and Ali Behrouz: goo.gle/47IOCjP Read the NeurIPS 2025 paper: abehrouz.github.io/files/NL.…
NL provides a framework for multi-time scale updates, allowing different components of the model to adjust at different frequencies.
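As a rough intuition for multi-time-scale updates (a toy sketch, not the actual NL algorithm; the group names, loss, and `SLOW_PERIOD` are hypothetical), one can imagine two parameter groups where "fast" parameters take a gradient step every iteration while "slow" parameters step only once every few iterations:

```python
# Toy multi-time-scale update loop: "fast" params update every step,
# "slow" params only every SLOW_PERIOD steps, mimicking components
# that adjust at different frequencies. Hypothetical illustration only.

SLOW_PERIOD = 4  # assumed slow-update interval

params = {"fast": 1.0, "slow": 1.0}
lr = 0.1

def grad(value):
    # Gradient of the toy quadratic loss 0.5 * value**2
    return value

fast_updates = slow_updates = 0
for step in range(1, 13):
    params["fast"] -= lr * grad(params["fast"])
    fast_updates += 1
    if step % SLOW_PERIOD == 0:
        params["slow"] -= lr * grad(params["slow"])
        slow_updates += 1

print(fast_updates, slow_updates)  # fast stepped 12 times, slow 3 times
```

After 12 iterations the fast parameters have been adjusted 12 times and the slow parameters only 3, so the slow component retains more of its original state between updates.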

Nov 8, 2025 · 6:48 AM UTC
