Towards Agents That Know When They Don't Know: Uncertainty as a Control Signal for Structured Reasoning
By: Josefa Lia Stoisser, Marc Boubnovski Martell, Lawrence Phillips, and more
Potential Business Impact:
Helps AI know when it is unsure about complex health data.
Large language model (LLM) agents are increasingly deployed in structured biomedical data environments, yet they often produce fluent but overconfident outputs when reasoning over complex multi-table data. We introduce an uncertainty-aware agent for query-conditioned multi-table summarization that leverages two complementary signals: (i) retrieval uncertainty--entropy over multiple table-selection rollouts--and (ii) summary uncertainty--combining self-consistency and perplexity. Summary uncertainty is incorporated into reinforcement learning (RL) with Group Relative Policy Optimization (GRPO), while both retrieval and summary uncertainty guide inference-time filtering and support the construction of higher-quality synthetic datasets. On multi-omics benchmarks, our approach improves factuality and calibration, nearly tripling correct and useful claims per summary (3.0→8.4 internal; 3.6→9.9 cancer multi-omics) and substantially improving downstream survival prediction (C-index 0.32→0.63). These results demonstrate that uncertainty can serve as a control signal--enabling agents to abstain, communicate confidence, and become more reliable tools for complex structured-data environments.
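The abstract names two uncertainty signals and an inference-time filtering step. Below is a minimal Python sketch of how such signals might be computed and used to decide when to abstain; the function names, the alpha-weighted combination rule, and the abstention thresholds are illustrative assumptions, not the paper's actual implementation.

```python
import math
from collections import Counter

def retrieval_entropy(rollouts: list[frozenset[str]]) -> float:
    """Shannon entropy over the table sets chosen across rollouts.

    Each rollout is the set of tables the agent selected in one sampled
    trajectory; low entropy means the agent consistently picks the same
    tables, while high entropy signals retrieval uncertainty.
    """
    counts = Counter(rollouts)
    n = len(rollouts)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def summary_uncertainty(self_consistency: float, perplexity: float,
                        alpha: float = 0.5) -> float:
    """Combine self-consistency (agreement rate in [0, 1] across sampled
    summaries) with perplexity into a single score in [0, 1].

    The alpha weight and the squashing of perplexity are illustrative
    choices; the paper does not specify the exact combination rule.
    """
    disagreement = 1.0 - self_consistency
    ppl_term = 1.0 - 1.0 / perplexity  # maps perplexity in [1, inf) to [0, 1)
    return alpha * disagreement + (1.0 - alpha) * ppl_term

# Inference-time filtering: abstain when either signal crosses a threshold.
rollouts = [frozenset({"rna", "cnv"}), frozenset({"rna", "cnv"}),
            frozenset({"rna"}), frozenset({"rna", "cnv"})]
u_ret = retrieval_entropy(rollouts)
u_sum = summary_uncertainty(self_consistency=0.7, perplexity=4.2)

if u_ret > 1.0 or u_sum > 0.6:  # thresholds are placeholders
    print(f"abstain (retrieval={u_ret:.2f}, summary={u_sum:.2f})")
else:
    print(f"emit summary (retrieval={u_ret:.2f}, summary={u_sum:.2f})")
```

In this sketch the agent communicates confidence by reporting both scores alongside its decision, which mirrors the paper's framing of uncertainty as a control signal rather than a post-hoc diagnostic.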
Similar Papers
Structured Uncertainty guided Clarification for LLM Agents
Computation and Language
Helps AI ask better questions to finish tasks.
The challenge of uncertainty quantification of large language models in medicine
Artificial Intelligence
Helps doctors know when AI is unsure about health advice.
Uncertainty-Driven Reliability: Selective Prediction and Trustworthy Deployment in Modern Machine Learning
Machine Learning (CS)
Helps computers know when they are wrong.