🧠 What if large models could read each other’s minds?
Our new paper (#neurips2025 spotlight), “Thought Communication in Multiagent Collaboration”, explores how large model agents can share latent thoughts, not just messages.
arxiv.org/abs/2510.20733 (CMU × Meta AI × MBZUAI)
Imagine teams of agents that don’t just talk, but directly read each other’s minds during collaboration. That’s the essence of Thought Communication, which goes beyond the fundamental limits of natural language, or indeed of any observed modality.
🧩 Theoretically, we prove that, in a general nonparametric setting, both shared and private latent thoughts can be identified from model states under sparsity regularization.
Our theory guarantees that the recovered representations reflect the agents’ true internal reasoning process, and that the causal structure between agents and their thoughts can be reliably recovered.
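To make the setup concrete, here is one way to write the assumed generative picture in symbols (my notation, not necessarily the paper’s): each agent’s model state is produced from thoughts shared across agents plus thoughts private to that agent.

```latex
% Hypothetical notation, not the paper's own: x^{(i)} is agent i's model
% state, z_S the shared thoughts, z_P^{(i)} agent i's private thoughts,
% and f_i an unknown nonlinear map.
\[
  x^{(i)} = f_i\bigl(z_S,\; z_P^{(i)}\bigr), \qquad i = 1, \dots, n .
\]
```

Identifiability then says that, from the observed states alone, z_S and each z_P^{(i)} can be recovered (in this literature, typically up to benign indeterminacies such as permutation and elementwise transforms), with sparsity regularization ruling out entangled solutions.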
⚙️ Practically, we introduce ThoughtComm, a general framework for latent thought communication. Guided by the theory, we implement a sparsity-regularized autoencoder that extracts thoughts from model states and infers which are shared and which are private.
This lets agents know not only what others are thinking, but also which thoughts they hold in common and which they keep private, a step toward genuine collective intelligence.
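For intuition, here is a minimal, hypothetical sketch of that kind of sparsity-regularized autoencoder; the architecture, loss weights, and the shared/private heuristic are my illustrative assumptions, not the paper’s actual implementation.

```python
import torch
import torch.nn as nn

# Sketch only: extract latent "thoughts" from agent hidden states with a
# sparsity-regularized autoencoder, then split them into shared vs. private.
# All names and dimensions here are illustrative assumptions.

class ThoughtAutoencoder(nn.Module):
    def __init__(self, state_dim: int = 4096, thought_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 1024), nn.GELU(), nn.Linear(1024, thought_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(thought_dim, 1024), nn.GELU(), nn.Linear(1024, state_dim)
        )

    def forward(self, h: torch.Tensor):
        z = self.encoder(h)               # latent thoughts
        return self.decoder(z), z

def thoughtcomm_loss(model, h, l1_weight=1e-3):
    h_hat, z = model(h)
    recon = ((h_hat - h) ** 2).mean()     # reconstruct the model states
    sparsity = z.abs().mean()             # L1 keeps most thoughts inactive
    return recon + l1_weight * sparsity

def shared_mask(z_agents: torch.Tensor, thresh: float = 1e-2):
    # z_agents: (num_agents, thought_dim). Call a thought "shared" if it is
    # active in more than one agent's encoding, "private" otherwise.
    active = z_agents.abs() > thresh
    return active.sum(dim=0) > 1

# Toy usage: encode two agents' (random stand-in) states, split the thoughts.
model = ThoughtAutoencoder()
h = torch.randn(2, 4096)
_, z = model(h)
print(shared_mask(z).sum().item(), "candidate shared thoughts")
```

In a real system the inputs would be the agents’ actual hidden states, and the inferred shared and private thoughts would presumably be routed back to the agents during collaboration.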
Across diverse models, communication beyond language directly enhances coordination, reasoning, and collaboration among LLM agents.
🔮 In line with recent studies, we believe this work further highlights the importance of the hidden world underlying foundation models, where understanding thought, not just observed behavior, becomes central to intelligence.
#MultiAgent #LLMs #Causality #AI #ML
Joint work with @zhuokaiz, Zijian Li, @xie_yaqi, Mingze Gao, @LizhuZhang, and @kunkzhang.