ToolScope: An Agentic Framework for Vision-Guided and Long-Horizon Tool Use
By: Mengjie Deng, Guanting Dong, Zhicheng Dou
Potential Business Impact:
Helps computers understand pictures and answer questions.
Recently, large language models (LLMs) have demonstrated remarkable problem-solving capabilities by autonomously integrating with external tools for collaborative reasoning. However, due to the inherently complex and diverse nature of multimodal information, enabling multimodal large language models (MLLMs) to flexibly and efficiently utilize external tools during reasoning remains an underexplored challenge. In this work, we introduce ToolScope, an agentic framework designed to unify global planning with local multimodal perception, adopting a specialized Perceive tool to mitigate visual context degradation in long-horizon VQA tasks. ToolScope comprises three primary components: the Global Navigator, the Agentic Executor, and the Response Synthesizer. The Global Navigator functions as a "telescope", offering high-level strategic guidance. The Agentic Executor operates iteratively to augment the MLLM with local perception through the integration of external tools (Search, Code, and Perceive). Finally, the Response Synthesizer consolidates and organizes the reasoning process into a coherent, user-friendly output. We evaluate ToolScope on four VQA benchmarks across diverse domains, including VQA 2.0, ScienceQA, MAT-Search and MathVista. It demonstrates strong generalization capabilities, achieving an average performance improvement of up to +6.69% across all datasets.
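The three-component pipeline the abstract describes (plan globally, execute tools iteratively, then synthesize) can be sketched as follows. This is a minimal illustrative sketch only: every function name, the plan format, and the stub tools are assumptions for exposition, not the authors' actual API or implementation.

```python
# Illustrative sketch of a ToolScope-style pipeline. The tool bodies are
# stubs; a real system would call a retriever, a code sandbox, and a
# vision model, with an MLLM driving each stage.

def search(query):
    # Stand-in for an external retrieval tool.
    return f"search results for: {query}"

def code(snippet):
    # Stand-in for a sandboxed code-execution tool.
    return f"execution result of: {snippet}"

def perceive(image, region):
    # Stand-in for the Perceive tool, which re-grounds the model in the
    # visual input to counter context degradation over long horizons.
    return f"visual details of {region} in {image}"

TOOLS = {"Search": search, "Code": code, "Perceive": perceive}

def global_navigator(question, image):
    # "Telescope": produce high-level strategic guidance as a plan,
    # here a simple list of (tool name, arguments) steps.
    return [("Perceive", (image, "whole scene")),
            ("Search", (question,))]

def agentic_executor(plan):
    # Iteratively invoke tools, accumulating local observations.
    observations = []
    for tool_name, args in plan:
        observations.append(TOOLS[tool_name](*args))
    return observations

def response_synthesizer(question, observations):
    # Consolidate the reasoning trace into a user-facing answer.
    return (f"Answer to '{question}' drawing on "
            f"{len(observations)} tool observations.")

def toolscope(question, image):
    plan = global_navigator(question, image)
    observations = agentic_executor(plan)
    return response_synthesizer(question, observations)

print(toolscope("What object is on the table?", "scene.png"))
```

In the paper's design the executor loops until the task is solved, re-invoking Perceive as needed; the fixed two-step plan above is only to show how the three components hand off to one another.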
Similar Papers
ToolScope: Enhancing LLM Agent Tool Use through Tool Merging and Context-Aware Filtering
Computation and Language
Helps AI pick the right tools faster.
AgentScope 1.0: A Developer-Centric Framework for Building Agentic Applications
Artificial Intelligence
Helps computers build smart helpers for tasks.
Multi-Faceted Evaluation of Tool-Augmented Dialogue Systems
Computation and Language
Finds hidden mistakes in talking computer helpers.