Towards Agents That Know When They Don't Know: Uncertainty as a Control Signal for Structured Reasoning
Keywords: Large Language Models (LLMs); Agentic Systems; Uncertainty Quantification; Structured Data Reasoning; Reinforcement Learning (RL); Multi-omics
TL;DR: By making LLM agents aware of their own uncertainty, we can obtain more reliable, trustworthy, and useful outputs in complex structured-data settings.
Abstract: Large language model (LLM) agents are increasingly deployed in structured biomedical data environments, yet they often produce fluent but overconfident outputs when reasoning over complex multi-table data. We introduce an uncertainty-aware agent for query-conditioned multi-table summarization that leverages two complementary signals: (i) retrieval uncertainty, the entropy over multiple table-selection rollouts, and (ii) summary uncertainty, which combines self-consistency and perplexity. Summary uncertainty is incorporated into reinforcement learning (RL) with Group Relative Policy Optimization (GRPO), while both retrieval and summary uncertainty guide inference-time filtering and support the construction of higher-quality synthetic datasets.
On multi-omics benchmarks, our approach improves factuality and calibration, nearly tripling the number of correct and useful claims per summary (3.0→8.4 on an internal benchmark; 3.6→9.9 on cancer multi-omics) and substantially improving downstream survival prediction (C-index 0.32→0.63). These results demonstrate that uncertainty can serve as a control signal, enabling agents to abstain, communicate confidence, and become more reliable tools for complex structured-data environments.
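To make the two signals concrete, here is a minimal sketch of how they might be computed. It assumes a caller-supplied `select_tables` callable (one table-selection rollout) and per-token log-probabilities from the summarizer; the names, signatures, exact-match agreement, and the combination rule are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def retrieval_uncertainty(select_tables, query, n_rollouts=8):
    """Entropy over table-selection rollouts: sample the agent's table
    choice several times and measure how spread out the choices are."""
    choices = [frozenset(select_tables(query)) for _ in range(n_rollouts)]
    probs = [count / n_rollouts for count in Counter(choices).values()]
    return -sum(p * math.log(p) for p in probs)  # 0.0 = fully consistent

def summary_uncertainty(summaries, token_logprobs, alpha=0.5):
    """Combine self-consistency (agreement across sampled summaries; exact
    match here, semantic similarity in practice) with the perplexity of
    the final summary under the model."""
    agreement = Counter(summaries).most_common(1)[0][1] / len(summaries)
    # exp(mean log-prob) is the reciprocal of perplexity, in (0, 1].
    mean_token_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    # Higher disagreement and lower per-token probability => more uncertain.
    return alpha * (1.0 - agreement) + (1.0 - alpha) * (1.0 - mean_token_prob)
```

At inference time, scores like these could be thresholded to implement the filtering and abstention behavior described in the abstract.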
Archival Option: The authors of this submission do *not* want it to appear in the archival proceedings.
Submission Number: 102