Re-ranking Reasoning Context with Tree Search Makes Large Vision-Language Models Stronger
By: Qi Yang, Chenghao Zhang, Lubin Fan, and more
Potential Business Impact:
Helps computers answer questions about pictures better.
Recent advancements in Large Vision-Language Models (LVLMs) have significantly improved performance in Visual Question Answering (VQA) tasks through multimodal Retrieval-Augmented Generation (RAG). However, existing methods still face challenges, such as the scarcity of knowledge containing reasoning examples and erratic responses from retrieved knowledge. To address these issues, in this study, we propose a multimodal RAG framework, termed RCTS, which enhances LVLMs by constructing a Reasoning Context-enriched knowledge base and a Tree Search re-ranking method. Specifically, we introduce a self-consistent evaluation mechanism to enrich the knowledge base with intrinsic reasoning patterns. We further propose a Monte Carlo Tree Search with Heuristic Rewards (MCTS-HR) to prioritize the most relevant examples. This ensures that LVLMs can leverage high-quality contextual reasoning for better and more consistent responses. Extensive experiments demonstrate that our framework achieves state-of-the-art performance on multiple VQA datasets, significantly outperforming In-Context Learning (ICL) and Vanilla-RAG methods. These results highlight the effectiveness of our knowledge base and re-ranking method in improving LVLMs. Our code is available at https://github.com/yannqi/RCTS-RAG.
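To make the re-ranking idea concrete, the sketch below shows a simplified Monte Carlo Tree Search that orders retrieved examples using a heuristic reward. This is not the paper's MCTS-HR implementation: the `relevance` scores, the position-decay heuristic, and all function names here are illustrative assumptions standing in for the rewards the authors compute from the knowledge base.

```python
import math
import random


def mcts_rerank(candidates, relevance, iterations=2000, c=0.5, seed=0):
    """Re-rank retrieved examples with a simplified MCTS.

    `relevance` maps candidate index -> a heuristic score (a stand-in
    for the paper's heuristic rewards). States are tuples of already
    placed indices; actions append one remaining index.
    """
    rng = random.Random(seed)
    n = len(candidates)
    N, Q = {}, {}  # visit counts and total reward per (state, action)

    def heuristic(order):
        # Assumed reward: earlier slots weighted more heavily.
        return sum(relevance[i] / (pos + 1) for pos, i in enumerate(order))

    def select(state, remaining):
        # UCB1 over the untried/tried actions at this node.
        total = sum(N.get((state, a), 0) for a in remaining) + 1

        def ucb(a):
            n_a = N.get((state, a), 0)
            if n_a == 0:
                return float("inf")
            return Q[(state, a)] / n_a + c * math.sqrt(math.log(total) / n_a)

        return max(remaining, key=ucb)

    for _ in range(iterations):
        state, remaining, path = (), list(range(n)), []
        # Selection/expansion: walk down until a full ordering is built.
        while remaining:
            a = select(state, remaining)
            path.append((state, a))
            state += (a,)
            remaining.remove(a)
        reward = heuristic(state)
        # Backpropagation: credit every (state, action) on the path.
        for key in path:
            N[key] = N.get(key, 0) + 1
            Q[key] = Q.get(key, 0.0) + reward

    # Read off the final ranking greedily by visit count.
    order, remaining = (), list(range(n))
    while remaining:
        a = max(remaining, key=lambda x: N.get((order, x), 0))
        order += (a,)
        remaining.remove(a)
    return [candidates[i] for i in order]


if __name__ == "__main__":
    examples = ["example-a", "example-b", "example-c"]
    scores = [0.1, 0.9, 0.5]
    print(mcts_rerank(examples, scores))  # most relevant example ranked first
```

In the full framework the reward would come from heuristics over the reasoning-context knowledge base rather than a fixed score list, but the selection/expansion/backpropagation loop has the same shape.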
Similar Papers
MCTS-RAG: Enhancing Retrieval-Augmented Generation with Monte Carlo Tree Search
Computation and Language
Helps small AI understand hard questions better.
VReST: Enhancing Reasoning in Large Vision-Language Models through Tree Search and Self-Reward Mechanism
CV and Pattern Recognition
Helps computers solve tricky math problems better.
Socratic-MCTS: Test-Time Visual Reasoning by Asking the Right Questions
CV and Pattern Recognition
Finds hidden answers in old AI models.