Excellent example of the core issue I was trying to get at wrt "virtual cells": optimizing a prediction objective will not necessarily yield an accurate representation of the underlying physical or biological phenomena. It's important to be very clear about this. 1/
Can an AI model predict perfectly and still have a terrible world model?
What would that even mean?
Our new ICML paper formalizes these questions.
One result tells the story: a transformer trained on 10M solar systems nails orbit prediction, but it botches the underlying gravitational laws 🧵
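To make "botches the gravitational laws" concrete, here is a minimal sketch (not the paper's actual setup) of how one could probe a trajectory predictor for the force law it implies: simulate two-body orbits, fit a crude stand-in next-step model, then estimate the exponent of the force law its predicted accelerations obey. The simulator, the linear stand-in predictor, and every name below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: does a next-step orbit predictor imply an inverse-square law?
# (Toy illustration only; the paper uses a trained transformer, not this linear map.)
import numpy as np

G, M = 1.0, 1.0   # gravitational constant and central mass, units chosen so both = 1
dt = 0.01

def simulate_orbit(r0, v0, steps=2000):
    """Integrate a planet around a fixed central mass with velocity-Verlet steps."""
    pos, vel = np.array(r0, float), np.array(v0, float)
    traj = []
    for _ in range(steps):
        acc = -G * M * pos / np.linalg.norm(pos) ** 3   # true inverse-square acceleration
        vel = vel + 0.5 * dt * acc
        pos = pos + dt * vel
        acc = -G * M * pos / np.linalg.norm(pos) ** 3
        vel = vel + 0.5 * dt * acc
        traj.append(pos.copy())
    return np.array(traj)

# Training data: many orbits with varied initial conditions.
rng = np.random.default_rng(0)
orbits = [simulate_orbit([rng.uniform(1, 2), 0.0],
                         [0.0, rng.uniform(0.7, 1.0)]) for _ in range(50)]

# Stand-in "predictor": a linear map from (x_t, x_{t-1}) to x_{t+1}, fit by least squares.
# This is only a crude proxy for a learned sequence model.
X, Y = [], []
for tr in orbits:
    X.append(np.hstack([tr[1:-1], tr[:-2]]))
    Y.append(tr[2:])
X, Y = np.vstack(X), np.vstack(Y)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Held-out orbit: measure next-step prediction error.
test = simulate_orbit([1.5, 0.0], [0.0, 0.85])
pred = np.hstack([test[1:-1], test[:-2]]) @ W
print("mean next-step position error:", np.abs(pred - test[2:]).mean())

# Probe the implied force law: recover acceleration from the model's predictions
# by finite differences, then fit the exponent p in |a| ~ r^p.
acc_hat = (pred[2:] - 2 * pred[1:-1] + pred[:-2]) / dt ** 2
r = np.linalg.norm(test[3:-1], axis=1)
slope = np.polyfit(np.log(r), np.log(np.linalg.norm(acc_hat, axis=1)), 1)[0]
print("fitted exponent p in |a| ~ r^p:", slope, "(Newton: -2.0)")
```

The point of the probe is the last two lines: low next-step error says nothing by itself about whether the exponent the model implies comes out near Newton's -2.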