Enabling Agents to Communicate Entirely in Latent Space
By: Zhuoyun Du, Runze Wang, Huiyu Bai, and more
Potential Business Impact:
Lets AI agents share their internal thoughts directly, making collaboration faster and smarter.
While natural language is the de facto communication medium for LLM-based agents, it presents a fundamental constraint. Downsampling rich internal latent states into discrete tokens inherently limits the depth and nuance of information that can be transmitted, thereby hindering collaborative problem-solving. Inspired by human mind-reading, we propose Interlat (Inter-agent Latent Space Communication), a paradigm that leverages the last hidden states of an LLM as a representation of its mind for direct transmission (termed latent communication). An additional compression stage further condenses these transmitted latents via entirely latent space reasoning. Experiments demonstrate that Interlat outperforms both fine-tuned chain-of-thought (CoT) prompting and single-agent baselines, promoting more exploratory behavior and enabling genuine utilization of latent information. The compression stage not only substantially accelerates inference but also maintains competitive performance through an efficient information-preserving mechanism. We position this work as a feasibility study of entirely latent space inter-agent communication; our results highlight its potential and offer valuable insights for future research.
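To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of what passing last-layer hidden states between two agents could look like with a Hugging Face-style causal LM. The model name, the prompts, and the decision to simply prepend the sender's latents to the receiver's input embeddings are illustrative assumptions; the paper's actual architecture, training, and compression mechanism are not reproduced here.

```python
# Hypothetical sketch of "latent communication": agent A emits its last-layer
# hidden states, and agent B consumes them directly as extra input embeddings
# instead of reading decoded tokens. Details are assumptions, not the paper's method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM whose hidden size matches its embedding size
tok = AutoTokenizer.from_pretrained(model_name)
agent_a = AutoModelForCausalLM.from_pretrained(model_name)
agent_b = AutoModelForCausalLM.from_pretrained(model_name)

@torch.no_grad()
def latent_message(model, prompt: str) -> torch.Tensor:
    """Run the sender on a prompt and return its last-layer hidden states
    (shape [1, seq_len, hidden_dim]) as the 'message'."""
    ids = tok(prompt, return_tensors="pt")
    out = model(**ids, output_hidden_states=True)
    return out.hidden_states[-1]

@torch.no_grad()
def receive_latents(model, latents: torch.Tensor, prompt: str) -> str:
    """Prepend the sender's latents to the receiver's own input embeddings
    and generate a continuation. In practice the latents would likely need a
    learned projection into the receiver's embedding space; omitted here."""
    ids = tok(prompt, return_tensors="pt")
    own_embeds = model.get_input_embeddings()(ids["input_ids"])
    inputs_embeds = torch.cat([latents, own_embeds], dim=1)
    gen = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=32)
    return tok.decode(gen[0], skip_special_tokens=True)

msg = latent_message(agent_a, "Plan: search the left branch of the tree first.")
reply = receive_latents(agent_b, msg, "Given the partner's plan, the next step is")
print(reply)
```

In this toy setup the receiver sees the sender's full hidden-state sequence rather than a lossy token summary, which is the intuition behind latent communication; the paper's compression stage would additionally shorten that latent sequence before transmission.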
Similar Papers
Thought Communication in Multiagent Collaboration
Machine Learning (CS)
Lets AI agents share thoughts directly, like telepathy.
Latent Collaboration in Multi-Agent Systems
Computation and Language
AI models work together better in their minds.
Latent Reasoning in LLMs as a Vocabulary-Space Superposition
Computation and Language
Makes computers think faster, using less power.