M$^3$Searcher: Modular Multimodal Information Seeking Agency with Retrieval-Oriented Reasoning
By: Xiaohan Yu, Chao Feng, Lang Mei, and more
Recent advances in DeepResearch-style agents have demonstrated strong capabilities in autonomous information acquisition and synthesis from real-world web environments. However, existing approaches remain fundamentally limited to the text modality. Extending autonomous information-seeking agents to multimodal settings introduces critical challenges: the specialization-generalization trade-off that emerges when training models for multimodal tool use at scale, and the severe scarcity of training data capturing complex, multi-step multimodal search trajectories. To address these challenges, we propose M$^3$Searcher, a modular multimodal information-seeking agent that explicitly decouples information acquisition from answer derivation. M$^3$Searcher is optimized with a retrieval-oriented multi-objective reward that jointly encourages factual accuracy, reasoning soundness, and retrieval fidelity. In addition, we develop MMSearchVQA, a multimodal multi-hop dataset to support retrieval-centric RL training. Experimental results demonstrate that M$^3$Searcher outperforms existing approaches, exhibits strong transfer adaptability, and reasons effectively in complex multimodal tasks.
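The abstract names the three reward components (factual accuracy, reasoning soundness, retrieval fidelity) but not how they are combined. As a rough illustration, the sketch below shows one common way to assemble such a multi-objective reward as a weighted sum; the linear form, the weights, and the per-term scoring functions are assumptions for illustration, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class RewardWeights:
    accuracy: float = 0.5    # weight on factual accuracy of the final answer
    reasoning: float = 0.25  # weight on soundness of the reasoning trace
    retrieval: float = 0.25  # weight on fidelity of retrieved evidence

def multi_objective_reward(
    answer_correct: float,   # in [0, 1], e.g. exact-match or judge score
    reasoning_score: float,  # in [0, 1], e.g. judge-rated trace soundness
    retrieval_score: float,  # in [0, 1], e.g. fraction of gold evidence retrieved
    w: RewardWeights = RewardWeights(),
) -> float:
    """Weighted combination of the three objectives named in the abstract.

    Only the objective names come from the paper; this linear combination
    and its weights are hypothetical placeholders.
    """
    return (
        w.accuracy * answer_correct
        + w.reasoning * reasoning_score
        + w.retrieval * retrieval_score
    )

# Example rollout: correct answer, solid trace, partial evidence coverage.
print(multi_objective_reward(1.0, 0.8, 0.5))  # -> 0.825
```

A weighted sum like this keeps each objective's gradient contribution explicit and lets the weights be tuned so the policy is not rewarded for guessing correct answers without faithful retrieval.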
Similar Papers
InSight-o3: Empowering Multimodal Foundation Models with Generalized Visual Search
CV and Pattern Recognition
AI learns to understand pictures better.
DeepMMSearch-R1: Empowering Multimodal LLMs in Multimodal Web Search
CV and Pattern Recognition
Lets computers search the web for answers.
Seeing, Listening, Remembering, and Reasoning: A Multimodal Agent with Long-Term Memory
CV and Pattern Recognition
Helps robots remember and learn from videos.