Knowing What's Missing: Assessing Information Sufficiency in Question Answering
By: Akriti Jain, Aparna Garimella
Potential Business Impact:
Helps computers know when they don't know answers.
Determining whether a provided context contains sufficient information to answer a question is a critical challenge for building reliable question-answering systems. While simple prompting strategies have shown success on factual questions, they frequently fail on inferential ones that require reasoning beyond direct text extraction. We hypothesize that asking a model to first reason about what specific information is missing provides a more reliable, implicit signal for assessing overall sufficiency. To this end, we propose a structured Identify-then-Verify framework for robust sufficiency modeling. Our method first generates multiple hypotheses about missing information and establishes a semantic consensus. It then performs a critical verification step, forcing the model to re-examine the source text to confirm whether this information is truly absent. We evaluate our method against established baselines across diverse multi-hop and factual QA datasets. The results demonstrate that by guiding the model to justify its claims about missing information, our framework produces more accurate sufficiency judgments while clearly articulating any information gaps.
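To make the described pipeline concrete, below is a minimal sketch of how an Identify-then-Verify loop could be wired around a language model, based only on the steps named in the abstract. All names here (`llm`, `identify_missing_info`, `verify_absence`, `is_sufficient`) are illustrative placeholders rather than the authors' implementation, and the "semantic consensus" step is approximated by a simple normalized majority vote over sampled hypotheses.

```python
# A minimal sketch of the Identify-then-Verify idea from the abstract.
# Assumptions: `llm` is any prompt-to-text callable; semantic consensus is
# approximated here by a normalized majority vote, not a real clustering step.

from collections import Counter
from typing import Callable, List

LLM = Callable[[str], str]  # any function mapping a prompt to a model response


def identify_missing_info(llm: LLM, context: str, question: str, n_samples: int = 5) -> List[str]:
    """Sample several hypotheses about what information the context lacks."""
    prompt = (
        "Context:\n" + context + "\n\n"
        "Question: " + question + "\n"
        "If the context is insufficient, name the single most important missing fact. "
        "If it is sufficient, answer exactly: NONE."
    )
    return [llm(prompt).strip() for _ in range(n_samples)]


def consensus(hypotheses: List[str]) -> str:
    """Pick the most common hypothesis (a crude stand-in for semantic clustering)."""
    normalized = [h.lower().strip(" .") for h in hypotheses]
    return Counter(normalized).most_common(1)[0][0]


def verify_absence(llm: LLM, context: str, claimed_gap: str) -> bool:
    """Re-examine the source text to confirm the claimed information is truly absent."""
    prompt = (
        "Context:\n" + context + "\n\n"
        "Claim: the context does NOT state the following: " + claimed_gap + "\n"
        "Answer YES if the information is indeed absent, NO if the context contains it."
    )
    return llm(prompt).strip().upper().startswith("YES")


def is_sufficient(llm: LLM, context: str, question: str) -> bool:
    """Identify-then-Verify: sufficient iff no verified information gap remains."""
    gap = consensus(identify_missing_info(llm, context, question))
    if gap == "none":
        return True
    # Only trust the claimed gap if the model confirms it on re-reading the context.
    return not verify_absence(llm, context, gap)


if __name__ == "__main__":
    # Toy mock model: claims a date is missing and then confirms its absence.
    def mock_llm(prompt: str) -> str:
        return "YES" if "Claim:" in prompt else "the founding date of the company"

    print(is_sufficient(mock_llm, "Acme Corp makes anvils.", "When was Acme founded?"))
    # -> False: a verified information gap exists, so the context is insufficient.
```

The key design choice mirrored here is that a sufficiency judgment is never made directly; it falls out of whether a concrete, verified statement of missing information survives the re-examination step.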
Similar Papers
Beyond Solving Math Quiz: Evaluating the Ability of Large Reasoning Models to Ask for Information
Artificial Intelligence
AI asks questions when it doesn't know.
Finding Answers in Thought Matters: Revisiting Evaluation on Large Language Models with Reasoning
Computation and Language
Makes AI math answers more trustworthy.
If We May De-Presuppose: Robustly Verifying Claims through Presupposition-Free Question Decomposition
Computation and Language
Makes AI answers more truthful and reliable.