Multi-Step Reasoning for Embodied Question Answering via Tool Augmentation
By: Mingliang Zhai, Hansheng Liang, Xiaomeng Fan, and more
Potential Business Impact:
Robot learns to find answers by thinking.
Embodied Question Answering (EQA) requires agents to explore 3D environments to obtain observations and answer questions about the scene. Existing methods use vision-language models (VLMs) to explore the environment and answer questions directly, without explicit thinking or planning, which limits their reasoning ability and leads to excessive or inefficient exploration and ineffective responses. In this paper, we introduce ToolEQA, an agent that integrates external tools with multi-step reasoning: the tools supply additional task-relevant information, helping the model choose a better exploration direction at each reasoning step and thereby gather more effective observations. As a result, ToolEQA produces more accurate responses with a shorter exploration distance. To strengthen the model's tool-usage and multi-step reasoning abilities, we further design a novel EQA data generation pipeline that automatically constructs large-scale EQA tasks with reasoning trajectories and corresponding answers. Using this pipeline, we collect the EQA-RT dataset of about 18K tasks, split into a training set, EQA-RT-Train, and two test sets: EQA-RT-Seen (scenes overlapping with the training set) and EQA-RT-Unseen (novel scenes). Experiments on EQA-RT-Seen and EQA-RT-Unseen show that ToolEQA improves the success rate by 9.2%-20.2% over state-of-the-art baselines, and outperforms zero-shot ToolEQA by 10% in success rate. ToolEQA also achieves state-of-the-art performance on the HM-EQA, OpenEQA, and EXPRESS-Bench datasets, demonstrating its generality. Project homepage: https://tooleqa.github.io.
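The loop the abstract describes, where an agent repeatedly picks a tool, gathers an observation, and stops once it can answer while tracking exploration cost, can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the tool names, the trivial planner, and the mock scene are all assumptions, and a real agent would use a VLM to drive tool selection from its reasoning trace.

```python
# Hedged sketch of a multi-step, tool-augmented reasoning loop in the
# spirit of ToolEQA's description. All tools, the planner, and the scene
# contents are illustrative placeholders, not the paper's actual method.
from dataclasses import dataclass, field

@dataclass
class Episode:
    question: str
    observations: list = field(default_factory=list)
    distance: float = 0.0  # exploration cost accumulated so far

# Stand-in "tools": each returns an observation plus its movement cost.
def detect_objects(scene):
    return f"objects seen: {', '.join(scene)}", 1.0

def read_text(scene):
    return "sign reads: 'Kitchen'", 0.5

TOOLS = {"detect_objects": detect_objects, "read_text": read_text}

def plan_next_tool(episode):
    """Trivial planner: detect objects, then read text, then stop.
    A real agent would let a VLM choose the next tool from the
    reasoning trace built up in episode.observations."""
    if not episode.observations:
        return "detect_objects"
    if len(episode.observations) == 1:
        return "read_text"
    return None  # enough information gathered; answer now

def run_agent(question, scene, max_steps=5):
    ep = Episode(question)
    for _ in range(max_steps):
        tool = plan_next_tool(ep)
        if tool is None:
            break  # planner decided it can answer
        obs, cost = TOOLS[tool](scene)
        ep.observations.append(obs)
        ep.distance += cost
    # Toy answerer: check the gathered observations for the queried object.
    answer = "yes" if "mug" in " ".join(ep.observations) else "unknown"
    return answer, ep

answer, ep = run_agent("Is there a mug in the kitchen?", ["mug", "table"])
print(answer, len(ep.observations), ep.distance)  # → yes 2 1.5
```

The key point the sketch mirrors is that each tool call both adds information and adds cost, so a planner that stops as soon as it can answer yields shorter exploration distances, which is the trade-off the paper's success-rate and distance results measure.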
Similar Papers
ToolVQA: A Dataset for Multi-step Reasoning VQA with External Tools
Artificial Intelligence
Helps computers use tools to solve problems.
Beyond the Destination: A Novel Benchmark for Exploration-Aware Embodied Question Answering
CV and Pattern Recognition
Helps robots explore and answer questions better.
BridgeEQA: Virtual Embodied Agents for Real Bridge Inspections
CV and Pattern Recognition
Helps robots inspect bridges by answering questions.