InteGround: On the Evaluation of Verification and Retrieval Planning in Integrative Grounding

Published: September 20, 2025 | arXiv ID: 2509.16534v1

By: Cheng Jiayang, Qianqian Zhuang, Haoran Li, and more

Potential Business Impact:

Helps computers combine facts to answer questions.

Business Areas:
Semantic Search, Internet Services

Grounding large language models (LLMs) in external knowledge sources is a promising method for faithful prediction. While existing grounding approaches work well for simple queries, many real-world information needs require synthesizing multiple pieces of evidence. We introduce "integrative grounding" -- the challenge of retrieving and verifying multiple inter-dependent pieces of evidence to support a hypothesis query. To systematically study this problem, we repurpose data from four domains for evaluating integrative grounding capabilities. Our investigation reveals two critical findings: First, in groundedness verification, while LLMs are robust to redundant evidence, they tend to rationalize using internal knowledge when information is incomplete. Second, in examining retrieval planning strategies, we find that undirected planning can degrade performance through noise introduction, while premise abduction emerges as a promising approach due to its logical constraints. Additionally, LLMs' zero-shot self-reflection capabilities consistently improve grounding quality. These insights provide valuable direction for developing more effective integrative grounding systems.
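The retrieval-planning idea in the abstract can be illustrated with a minimal sketch: verify whether every premise of a hypothesis is covered by retrieved evidence, and when coverage is incomplete, abduce the missing premises as directed retrieval queries. All function names and the rule-based "abduction" below are illustrative assumptions, not the paper's actual method.

```python
# Toy sketch of an integrative grounding loop (illustrative only).

def verify(hypothesis_premises, evidence):
    """Groundedness check: the hypothesis counts as grounded only if
    every required premise is covered by some retrieved evidence item."""
    missing = hypothesis_premises - evidence
    return len(missing) == 0, missing

def abduce_queries(missing):
    """Premise abduction: turn each uncovered premise into a directed
    retrieval query, constraining search to what the proof still needs."""
    return [f"find evidence for: {p}" for p in sorted(missing)]

def ground(hypothesis_premises, retrieve, max_rounds=3):
    """Iteratively retrieve until all premises are covered or rounds run out."""
    evidence = set()
    for _ in range(max_rounds):
        grounded, missing = verify(hypothesis_premises, evidence)
        if grounded:
            return True, evidence
        for query in abduce_queries(missing):
            evidence |= retrieve(query)
    return verify(hypothesis_premises, evidence)[0], evidence

# Toy corpus: a query returns the named premise if the corpus contains it.
corpus = {"A", "B", "C"}

def retrieve(query):
    premise = query.removeprefix("find evidence for: ")
    return {premise} if premise in corpus else set()

ok, evidence = ground({"A", "B"}, retrieve)  # ok is True, evidence == {"A", "B"}
```

The sketch mirrors the paper's contrast between undirected planning and premise abduction: queries here are generated only for premises the verifier still lacks, which is why abduction avoids injecting unrelated noise into the evidence pool.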

Country of Origin
🇭🇰 Hong Kong

Page Count
16 pages

Category
Computer Science:
Computation and Language