Theory of Mind Using Active Inference: A Framework for Multi-Agent Cooperation
By: Riddhi J. Pitliya, Ozan Catal, Toon Van de Maele, and more
Potential Business Impact:
Lets AI agents and robots cooperate by inferring others' beliefs and goals from their actions
We present a novel approach to multi-agent cooperation by implementing theory of mind (ToM) within active inference. ToM - the ability to understand that others can hold differing knowledge and goals - enables agents to reason about others' beliefs while planning their own actions. Unlike previous active inference approaches to multi-agent cooperation, our method relies neither on task-specific shared generative models nor on explicit communication, while remaining generalisable. In our framework, the ToM-equipped agent maintains distinct representations of its own and others' beliefs and goals. We extend the sophisticated inference tree-based planning algorithm to systematically explore joint policy spaces through recursive reasoning. Our approach is evaluated through collision avoidance and foraging task simulations. Results demonstrate that ToM-equipped agents cooperate more effectively than their non-ToM counterparts, avoiding collisions and reducing redundant effort. Crucially, ToM agents accomplish this by inferring others' beliefs solely from observable behaviour. This work advances practical applications in artificial intelligence while providing computational insights into ToM.
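The abstract describes an agent that holds separate representations of its own and the other agent's beliefs, infers the other's goal purely from observed behaviour, and plans by recursively reasoning over joint action choices. The Python sketch below is only an illustration of that idea, not the paper's method: the grid world, the softmax action likelihood, and the collision-penalised scoring are assumptions standing in for the authors' generative models and expected-free-energy planning.

```python
# Minimal sketch (not the authors' implementation): a ToM-equipped agent that
# keeps a separate belief over the other agent's goal, updates it from observed
# moves, and plans by recursively scoring joint action choices on a grid.
import numpy as np

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0), "stay": (0, 0)}


def step(pos, action):
    """Deterministic transition on an unbounded grid."""
    dx, dy = ACTIONS[action]
    return (pos[0] + dx, pos[1] + dy)


def likelihood_of_action(pos, action, goal):
    """Softmax likelihood that an agent heading to `goal` picks `action` (illustrative model)."""
    logits = -np.array([np.linalg.norm(np.subtract(step(pos, a), goal)) for a in ACTIONS])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[list(ACTIONS).index(action)]


class ToMAgent:
    def __init__(self, my_goal, candidate_goals):
        self.my_goal = my_goal
        self.candidate_goals = candidate_goals
        # Belief about the *other* agent's goal, held separately from own goal/state.
        self.other_goal_belief = np.ones(len(candidate_goals)) / len(candidate_goals)

    def observe_other(self, other_pos, other_action):
        """Bayesian update of the belief over the other's goal from behaviour alone."""
        like = np.array([likelihood_of_action(other_pos, other_action, g)
                         for g in self.candidate_goals])
        post = like * self.other_goal_belief
        self.other_goal_belief = post / post.sum()

    def plan(self, my_pos, other_pos, depth=2):
        """Score own actions by marginalising over the other's inferred goal and likely move."""
        best_action, best_score = None, -np.inf
        for my_a in ACTIONS:
            my_next = step(my_pos, my_a)
            score = 0.0
            for g, p_g in zip(self.candidate_goals, self.other_goal_belief):
                for other_a in ACTIONS:
                    p_a = likelihood_of_action(other_pos, other_a, g)
                    other_next = step(other_pos, other_a)
                    value = -np.linalg.norm(np.subtract(my_next, self.my_goal))
                    if my_next == other_next:        # collision penalty
                        value -= 10.0
                    if depth > 1:                    # recursive lookahead
                        value += 0.5 * self._rollout(my_next, other_next, g, depth - 1)
                    score += p_g * p_a * value
            if score > best_score:
                best_action, best_score = my_a, score
        return best_action

    def _rollout(self, my_pos, other_pos, other_goal, depth):
        """Greedy one-step continuation used inside the recursion."""
        best = -np.inf
        other_a = max(ACTIONS, key=lambda a: likelihood_of_action(other_pos, a, other_goal))
        other_next = step(other_pos, other_a)
        for my_a in ACTIONS:
            my_next = step(my_pos, my_a)
            value = -np.linalg.norm(np.subtract(my_next, self.my_goal))
            if my_next == other_next:
                value -= 10.0
            best = max(best, value)
        return best


if __name__ == "__main__":
    agent = ToMAgent(my_goal=(3, 0), candidate_goals=[(0, 3), (3, 0)])
    agent.observe_other(other_pos=(0, 0), other_action="up")   # other heads toward (0, 3)
    print(agent.other_goal_belief)                              # belief shifts toward (0, 3)
    print(agent.plan(my_pos=(1, 0), other_pos=(0, 1)))          # chooses a move toward own goal
```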
Similar Papers
A Computable Game-Theoretic Framework for Multi-Agent Theory of Mind
Artificial Intelligence
Helps computers understand what others are thinking.
Towards properly implementing Theory of Mind in AI systems: An account of four misconceptions
Human-Computer Interaction
Teaches computers to understand people's thoughts.