PathFinder: MCTS and LLM Feedback-based Path Selection for Multi-Hop Question Answering
By: Durga Prasad Maram, Kalpa Gunaratna, Vijay Srinivasan, and more
Multi-hop question answering is a challenging task in which language models must reason over multiple steps to reach the correct answer. Leveraging the reasoning capabilities of Large Language Models, existing systems decompose an input question into multiple steps in order to analyze, retrieve, and reason. However, training-based approaches to this problem still suffer from LLM hallucinations and incorrect reasoning paths that hinder performance. Hence, we propose PATHFINDER, an approach that: (i) uses Monte Carlo Tree Search (MCTS) to generate training path traces, (ii) improves training data quality by filtering erroneous and overly long traces using sub-answer recall and LLM-as-a-judge verification, and (iii) reformulates sub-queries to handle failed retrieval cases. Following these steps, we demonstrate that PATHFINDER improves multi-hop QA performance on public benchmark datasets.
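The trace-filtering step described in the abstract can be illustrated with a minimal sketch. The data fields, recall threshold, step cap, and the LLM-judge stub below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Minimal sketch of the trace-filtering idea from the abstract: keep
# MCTS-generated reasoning traces whose intermediate sub-answers cover
# the gold sub-answers (sub-answer recall) and that are not overly long.
# Thresholds, field names, and the judge stub are assumptions, not
# PathFinder's actual implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class Trace:
    question: str
    sub_answers: List[str]   # intermediate answers produced along the path
    final_answer: str
    num_steps: int


def sub_answer_recall(trace: Trace, gold_sub_answers: List[str]) -> float:
    """Fraction of gold sub-answers that appear in the trace (string match)."""
    if not gold_sub_answers:
        return 1.0
    trace_text = " ".join(trace.sub_answers + [trace.final_answer]).lower()
    hits = sum(1 for gold in gold_sub_answers if gold.lower() in trace_text)
    return hits / len(gold_sub_answers)


def llm_judge_accepts(trace: Trace) -> bool:
    """Placeholder for LLM-as-a-judge verification of the reasoning path.
    A real system would prompt an LLM to check each step; here we only
    require a non-empty final answer."""
    return bool(trace.final_answer.strip())


def filter_traces(traces: List[Trace],
                  gold_sub_answers: List[str],
                  min_recall: float = 0.8,   # assumed threshold
                  max_steps: int = 6) -> List[Trace]:
    """Drop erroneous or overly long traces before using them for training."""
    kept = []
    for trace in traces:
        if trace.num_steps > max_steps:
            continue  # discard lengthy traces
        if sub_answer_recall(trace, gold_sub_answers) < min_recall:
            continue  # discard traces missing key intermediate answers
        if not llm_judge_accepts(trace):
            continue  # discard traces the judge rejects
        kept.append(trace)
    return kept
```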