PerspAct: Enhancing LLM Situated Collaboration Skills through Perspective Taking and Active Vision
By: Sabrina Patania, Luca Annese, Anita Pellegrini, and more
Potential Business Impact:
Helps robots understand what others see.
Recent advances in Large Language Models (LLMs) and multimodal foundation models have significantly broadened their application in robotics and collaborative systems. However, effective multi-agent interaction necessitates robust perspective-taking capabilities, enabling models to interpret both physical and epistemic viewpoints. Current training paradigms often neglect these interactive contexts, resulting in challenges when models must reason about the subjectivity of individual perspectives or navigate environments with multiple observers. This study evaluates whether explicitly incorporating diverse points of view using the ReAct framework, an approach that integrates reasoning and acting, can enhance an LLM's ability to understand and ground the demands of other agents. We extend the classic Director task by introducing active visual exploration across a suite of seven scenarios of increasing perspective-taking complexity. These scenarios are designed to challenge the agent's capacity to resolve referential ambiguity based on visual access and interaction, under varying state representations and prompting strategies, including ReAct-style reasoning. Our results demonstrate that explicit perspective cues, combined with active exploration strategies, significantly improve the model's interpretative accuracy and collaborative effectiveness. These findings highlight the potential of integrating active perception with perspective-taking mechanisms in advancing LLMs' application in robotics and multi-agent systems, setting a foundation for future research into adaptive and context-aware AI systems.
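To make the approach concrete, the sketch below illustrates what a ReAct-style loop for a Director-task-like scenario could look like: the prompt carries an explicit perspective cue (which objects the director can see), and the model alternates reasoning steps with actions such as looking around or committing to a referent. This is a minimal, hypothetical illustration, not the authors' implementation; the call_llm function, the SceneObject representation, and the Thought/Action wording are assumptions introduced here for clarity.

```python
# Hypothetical sketch of a ReAct-style perspective-taking loop for a
# Director-task-like setup. The LLM call and scene representation are
# placeholders, not the paper's actual code.

from dataclasses import dataclass


@dataclass
class SceneObject:
    name: str
    visible_to_director: bool  # explicit perspective cue: can the director see it?


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (plug in a real chat-completion client here)."""
    raise NotImplementedError


def react_resolve_reference(instruction: str,
                            scene: list[SceneObject],
                            max_steps: int = 5) -> str:
    """Alternate Thought/Action steps until the model commits to a referent."""
    # Only objects the director can see are valid targets of their instruction.
    shared_view = [o.name for o in scene if o.visible_to_director]
    transcript = (
        f"Director says: '{instruction}'.\n"
        f"Objects you can see: {[o.name for o in scene]}.\n"
        f"Objects the director can see: {shared_view}.\n"
        "Reply with 'Thought: ...' followed by either "
        "'Action: look(<object>)' or 'Action: answer(<object>)'.\n"
    )
    for _ in range(max_steps):
        step = call_llm(transcript)
        transcript += step + "\n"
        if "Action: answer(" in step:
            # The model has resolved the referential ambiguity.
            return step.split("Action: answer(")[1].rstrip(")\n")
        # A 'look' action would trigger active visual exploration; the resulting
        # observation is appended so the next reasoning step can use it.
        transcript += "Observation: (result of the look action)\n"
    return "no-answer"
```

The design point mirrored here is the one the abstract reports as decisive: the prompt separates what the agent sees from what the director sees, and the loop lets the model gather more visual information before answering, rather than resolving the reference from a single static description.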
Similar Papers
Growing Perspectives: Modelling Embodied Perspective Taking and Inner Narrative Development Using Large Language Models
Computation and Language
Helps computers understand and work together better.
Who Sees What? Structured Thought-Action Sequences for Epistemic Reasoning in LLMs
Artificial Intelligence
Helps robots understand what others see.
Large Language Models and 3D Vision for Intelligent Robotic Perception and Autonomy
Robotics
Robots understand and act on spoken commands.