Osaka University’s SANKEN team built “MicroAdapt,” a tiny on-device learner that watches a live data stream, carves it into recurring patterns, and keeps a small specialist model for each pattern. As conditions drift, it spawns new candidate models, merges or prunes the weak ones, and keeps optimizing, behaving more like an evolving colony than a single heavy neural net. Because it never hauls data to the cloud and the models stay small, it learns and predicts in real time on a Raspberry Pi drawing roughly 1.7 W.
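The paper’s exact algorithm isn’t spelled out here, but the colony idea is easy to sketch. Below is a minimal, hypothetical Python version: each “specialist” is a tiny autoregressive model, incoming windows get routed to whichever specialist fits best, a poor fit spawns a new one, and rarely-used members get pruned. Every class name, threshold, and model choice is my own stand-in, not MicroAdapt’s actual design.

```python
import numpy as np

class Specialist:
    """One tiny per-pattern model: a least-squares AR(p) fit.
    A hypothetical stand-in for MicroAdapt's per-pattern specialists."""
    def __init__(self, p=4):
        self.p = p
        self.coef = np.zeros(p)
        self.hits = 0

    def fit(self, w):
        # Refit on one window of the stream (rows = lag vectors).
        X = np.array([w[i:i + self.p] for i in range(len(w) - self.p)])
        self.coef, *_ = np.linalg.lstsq(X, w[self.p:], rcond=None)

    def error(self, w):
        X = np.array([w[i:i + self.p] for i in range(len(w) - self.p)])
        return float(np.mean((X @ self.coef - w[self.p:]) ** 2))

class Colony:
    """Route each window to the best-fitting specialist, spawn a new one
    on poor fit, prune the least-used when the pool overflows.
    Thresholds and pool size are hand-picked guesses, not the paper's."""
    def __init__(self, spawn_err=0.1, max_models=8):
        self.models, self.spawn_err, self.max_models = [], spawn_err, max_models

    def step(self, w):
        best = min(self.models, key=lambda m: m.error(w), default=None)
        if best is None or best.error(w) > self.spawn_err:
            best = Specialist()          # no good fit: spawn a candidate
            self.models.append(best)
        best.hits += 1
        best.fit(w)                      # the winner adapts to the newest window
        if len(self.models) > self.max_models:
            self.models.remove(min(self.models, key=lambda m: m.hits))
        return best

# Demo: a stream that alternates between two regimes every 500 steps.
rng = np.random.default_rng(0)
t = np.arange(2000)
stream = np.where((t // 500) % 2 == 0, np.sin(0.1 * t), 3 * np.sin(0.7 * t))
stream += 0.05 * rng.standard_normal(t.size)

colony = Colony()
for start in range(0, len(stream) - 64, 64):
    colony.step(stream[start:start + 64])
print(f"{len(colony.models)} specialists after the run")
```

The point of the pattern is that each model stays tiny and cheap to refit, which is what makes sub-2 W online learning plausible in the first place.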
In their tests, that design delivered up to 100,000× faster processing and about 60% lower error than deep-learning baselines on streaming time-series tasks, while automatically spotting regime changes and forecasting ahead. That makes it a fit for sensors, wearables, factory lines, vehicles, and anywhere the data distribution shifts and latency or power budgets rule.
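For flavor, here’s how cheap streaming regime detection can be. The sketch below is a standard Page-Hinkley test, not necessarily MicroAdapt’s mechanism: a few bytes of state flag a sustained shift in a stream’s mean, with the sensitivity parameters delta and lam hand-picked for this toy.

```python
import numpy as np

# Toy regime-change detector: a Page-Hinkley test, a classic low-memory
# way to flag a sustained shift in a stream's mean. Illustrative only;
# delta (drift tolerance) and lam (alarm threshold) are hand-picked.
rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(0.0, 0.2, 1000),   # regime A
                         rng.normal(1.5, 0.2, 1000)])  # regime B

mean, cum, cum_min, n = 0.0, 0.0, 0.0, 0
delta, lam = 0.05, 5.0
for t, x in enumerate(stream):
    n += 1
    mean += (x - mean) / n        # running mean of everything seen so far
    cum += x - mean - delta       # cumulative deviation above the mean
    cum_min = min(cum_min, cum)
    if cum - cum_min > lam:       # sustained upward shift -> alarm
        print(f"regime change detected at t={t}")
        break
```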
This isn’t a replacement for big LLMs or GPU training; it’s a different lane: adaptive, on-device modeling for non-stationary streams. If the results hold up across more datasets and hardware, you get devices that quietly get better on their own without shipping your data off-device.