Convergence Dynamics of Agent-to-Agent Interactions with Misaligned Objectives
By: Romain Cosentino, Sarath Shekkizhar, Adam Earle
Potential Business Impact:
Teaches AI to work together, even when disagreeing.
We develop a theoretical framework for agent-to-agent interactions in multi-agent scenarios. We consider a setup in which two language-model-based agents perform iterative gradient updates toward their respective objectives in-context, each using the other agent's output as input. We characterize the generation dynamics of the interaction when the agents have misaligned objectives and show that it converges to a biased equilibrium in which neither agent reaches its target, with residual errors predictable from the objective gap and the geometry induced by each agent's prompt. We establish conditions for asymmetric convergence and provide an algorithm that provably achieves an adversarial result, producing one-sided success. Experiments with trained transformer models, as well as GPT-5, on the task of in-context linear regression validate the theory. Our framework provides a setup to study, predict, and defend multi-agent systems, explicitly linking prompt design and interaction setup to stability, bias, and robustness.
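The biased-equilibrium claim can be illustrated with a toy simulation. The sketch below is not the paper's algorithm or model; it assumes each agent's prompt-induced objective can be proxied by a quadratic f_i(x) = 0.5 * ||A_i x - b_i||^2 with distinct targets, and that the in-context interaction reduces to alternating gradient steps on a shared state. All names here (A1, b1, grad, eta) are hypothetical stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Hypothetical proxies for each agent's prompt-induced geometry and target:
# agent i minimizes f_i(x) = 0.5 * ||A_i x - b_i||^2, with b1 != b2 (misaligned).
A1 = rng.standard_normal((d, d))
b1 = rng.standard_normal(d)
A2 = rng.standard_normal((d, d))
b2 = b1 + 0.5 * rng.standard_normal(d)  # objective gap between the two agents

def grad(A, b, x):
    # Gradient of 0.5 * ||A x - b||^2 with respect to x.
    return A.T @ (A @ x - b)

eta = 0.01  # step size, small enough for stability of both quadratics
x = np.zeros(d)
for _ in range(20000):
    x = x - eta * grad(A1, b1, x)  # agent 1 updates, consuming agent 2's output
    x = x - eta * grad(A2, b2, x)  # agent 2 updates, consuming agent 1's output

# Each agent's individual target, i.e. the minimizer of its own objective.
x1_star = np.linalg.lstsq(A1, b1, rcond=None)[0]
x2_star = np.linalg.lstsq(A2, b2, rcond=None)[0]
print("residual to agent 1 target:", np.linalg.norm(x - x1_star))
print("residual to agent 2 target:", np.linalg.norm(x - x2_star))
```

In this toy setting the alternating dynamics settle where the two gradients approximately cancel, so neither agent's own optimality condition holds: both residuals stay bounded away from zero, and they grow with the gap ||b1 - b2|| and depend on the geometry of A1 and A2, mirroring the abstract's claim in a much simpler setting.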
Similar Papers
Eliciting and Analyzing Emergent Misalignment in State-of-the-Art Large Language Models
Computation and Language
Makes AI models say bad things when tricked.
Disentangled Control of Multi-Agent Systems
Systems and Control
Helps robots work together safely, even when things change.
The Coming Crisis of Multi-Agent Misalignment: AI Alignment Must Be a Dynamic and Social Process
Artificial Intelligence
Makes AI teams work together safely with people.