Improve Rule Retrieval and Reasoning with Self-Induction and Relevance ReEstimate
By: Ziyang Huang, Wangtao Sun, Jun Zhao, and more
Potential Business Impact:
Helps computers find the right rules for thinking.
This paper systematically addresses the challenges of rule retrieval, a crucial yet underexplored area. Vanilla retrieval methods, which use sparse or dense retrievers to directly search for relevant rules to support downstream reasoning, often suffer from low accuracy. This is primarily due to a significant semantic gap between the instantiated facts in the queries and the abstract representations of the rules. Such misalignment results in suboptimal retrieval quality, which in turn degrades reasoning performance. To overcome these challenges, we propose Self-Induction Augmented Retrieval (SIAR), a novel approach that uses Large Language Models (LLMs) to induce potential inferential rules that may benefit reasoning by abstracting the underlying knowledge and logical structure of queries. These induced rules are then used to augment the query, improving retrieval effectiveness. Additionally, we introduce Rule Relevance ReEstimate (R$^3$), a method that re-estimates the relevance of retrieved rules by assessing whether the abstract knowledge they contain can be instantiated to align with the facts in the queries and whether it is helpful for reasoning. Extensive experiments across various settings demonstrate the effectiveness and versatility of our proposed methods.
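The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: `induce_rule` and `reestimate` are toy stand-ins for LLM calls (self-induction and R$^3$ relevance re-estimation), and the retriever is a crude word-overlap ranker standing in for a sparse/dense retriever.

```python
def induce_rule(query: str) -> str:
    """Stand-in for SIAR self-induction: a real system would prompt an LLM
    to abstract the query's facts into a general inferential rule."""
    return "if X is a parent of Y and Y is a parent of Z, X is a grandparent of Z"

def retrieve(query: str, rules: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank rules by word overlap with the (augmented) query."""
    q_words = set(query.lower().split())
    ranked = sorted(rules, key=lambda r: -len(q_words & set(r.lower().split())))
    return ranked[:k]

def reestimate(query: str, candidates: list[str]) -> list[str]:
    """Stand-in for R^3: keep only rules whose abstract terms can plausibly be
    instantiated with the query's facts (here, a crude overlap threshold;
    the paper uses an LLM judgment instead)."""
    q_words = set(query.lower().split())
    return [r for r in candidates if len(q_words & set(r.lower().split())) >= 2]

# A toy rule store.
rules = [
    "if X is a parent of Y and Y is a parent of Z, X is a grandparent of Z",
    "if X is a sibling of Y, Y is a sibling of X",
    "if X is located in Y and Y is located in Z, X is located in Z",
]

query = "Alice is a parent of Bob. Bob is a parent of Carol. Who is Carol's grandparent?"

# SIAR: augment the instantiated query with the self-induced abstract rule,
# narrowing the semantic gap before retrieval.
augmented = query + " " + induce_rule(query)
candidates = retrieve(augmented, rules)

# R^3: re-estimate relevance of the retrieved candidates against the raw query.
final = reestimate(query, candidates)
print(final[0])  # the grandparent rule survives re-estimation
```

The key design point mirrored here is that retrieval runs against the *augmented* query (abstract rule appended), while re-estimation checks the surviving candidates against the *original* facts.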
Similar Papers
Think Before You Retrieve: Learning Test-Time Adaptive Search with Small Language Models
Artificial Intelligence
Teaches small computers to find information better.
ARise: Towards Knowledge-Augmented Reasoning via Risk-Adaptive Search
Artificial Intelligence
Helps computers solve hard problems by checking their work.
RAISE: Enhancing Scientific Reasoning in LLMs via Step-by-Step Retrieval
Computation and Language
Helps computers solve hard science problems.