Self-Critique Guided Iterative Reasoning for Multi-hop Question Answering
By: Zheng Chu, Huiming Fan, Jingchang Chen, and more
Potential Business Impact:
Helps computers solve hard problems by thinking step-by-step.
Although large language models (LLMs) have demonstrated remarkable reasoning capabilities, they still struggle with knowledge-intensive multi-hop reasoning. Recent work explores iterative retrieval to address complex problems. However, the lack of intermediate guidance often results in inaccurate retrieval and flawed intermediate reasoning, ultimately yielding incorrect answers. To address these issues, we propose Self-Critique Guided Iterative Reasoning (SiGIR), which uses self-critique feedback to guide the iterative reasoning process. Specifically, through end-to-end training, we enable the model to iteratively address complex problems via question decomposition, while also self-evaluating its intermediate reasoning steps. During iterative reasoning, the model engages in branching exploration and uses self-evaluation to select promising reasoning trajectories. Extensive experiments on three multi-hop reasoning datasets demonstrate the effectiveness of our proposed method, which surpasses the previous SOTA by 8.6%. Furthermore, our thorough analysis offers insights for future research. Our code, data, and models are available on GitHub: https://github.com/zchuz/SiGIR-MHQA.
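To make the control flow concrete, below is a minimal Python sketch of self-critique guided iterative reasoning as the abstract describes it: trajectories are expanded by decomposing the question into sub-questions, each intermediate step is scored by a self-critique signal, and the highest-scoring branches are kept. The function names (`decompose`, `answer_subquestion`, `self_critique`) are hypothetical stand-ins for calls to the trained model and retriever, not the authors' API; consult the linked repository for the actual implementation.

```python
# Illustrative sketch of self-critique guided iterative reasoning.
# All model/retriever calls are stubbed; in SiGIR these would be
# produced by the end-to-end trained LLM.

from dataclasses import dataclass, field


@dataclass
class Trajectory:
    steps: list = field(default_factory=list)  # (sub-question, answer) pairs
    score: float = 0.0                         # cumulative self-critique score


def decompose(question: str, steps: list) -> list:
    """Propose candidate next sub-questions (branching exploration).
    Stub: a real system would generate these with the trained model."""
    return [f"sub-question {len(steps) + 1} of: {question}"]


def answer_subquestion(subq: str) -> str:
    """Retrieve evidence and answer the sub-question (stubbed)."""
    return f"answer to ({subq})"


def self_critique(subq: str, ans: str) -> float:
    """Score an intermediate step. In SiGIR the model evaluates its own
    reasoning; here we return a constant placeholder."""
    return 1.0


def sigir_style_search(question: str, beam: int = 2, max_hops: int = 3) -> Trajectory:
    """Iteratively expand trajectories, keeping the `beam` highest-scoring
    ones according to self-critique feedback."""
    frontier = [Trajectory()]
    for _ in range(max_hops):
        candidates = []
        for traj in frontier:
            for subq in decompose(question, traj.steps):
                ans = answer_subquestion(subq)
                step_score = self_critique(subq, ans)
                candidates.append(
                    Trajectory(traj.steps + [(subq, ans)], traj.score + step_score)
                )
        # Self-evaluation guides the selection of promising trajectories.
        frontier = sorted(candidates, key=lambda t: t.score, reverse=True)[:beam]
    return frontier[0]


if __name__ == "__main__":
    best = sigir_style_search("Who directed the film that won Best Picture in 1998?")
    for subq, ans in best.steps:
        print(subq, "->", ans)
```

The sketch reduces to a beam search over reasoning trajectories in which the self-critique score replaces an external reward; the hop budget, beam width, and scoring are all assumptions made for illustration.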
Similar Papers
RISE: Reasoning Enhancement via Iterative Self-Exploration in Multi-hop Question Answering
Computation and Language
Helps computers answer hard questions by thinking more.
GIER: Gap-Driven Self-Refinement for Large Language Models
Computation and Language
Makes AI smarter by letting it fix its own mistakes.
BMGQ: A Bottom-up Method for Generating Complex Multi-hop Reasoning Questions from Semi-structured Data
Artificial Intelligence
Makes computers better at answering tricky questions.