Towards Agents That Know When They Don't Know: Uncertainty as a Control Signal for Structured Reasoning
Keywords: LLM agents, uncertainty quantification, reinforcement learning, GRPO, structured reasoning, synthetic dataset quality
TL;DR: An uncertainty-aware LLM table agent uses retrieval entropy and output uncertainty as control signals for RL training and inference-time filtering, producing more factual, calibrated multi-table summaries and better downstream predictions.
Abstract: Large language model (LLM) agents are increasingly deployed in structured biomedical data environments, yet they often produce fluent but overconfident outputs when reasoning over complex multi-table data. We introduce an uncertainty-aware agent for query-conditioned multi-table summarization that leverages two complementary signals: (i) retrieval uncertainty—entropy over multiple table-selection rollouts—and (ii) summary uncertainty—combining self-consistency and perplexity. Summary uncertainty is incorporated into reinforcement learning (RL) with Group Relative Policy Optimization (GRPO), while both retrieval and summary uncertainty guide inference-time filtering and support the construction of higher-quality synthetic datasets.
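To make the two signals concrete, here is a minimal Python sketch of how they could be computed. The function names, the entropy-over-rollouts formulation, and the convex combination with weight `alpha` are illustrative assumptions; the abstract does not specify the paper's exact weighting or normalization.

```python
import math
from collections import Counter

def retrieval_entropy(rollout_selections):
    """Shannon entropy over table-selection rollouts.

    rollout_selections: one table-ID tuple per rollout, e.g.
    [("clinical", "rnaseq"), ("clinical", "rnaseq"), ("clinical",)].
    Low entropy means the rollouts agree on which tables to retrieve.
    """
    counts = Counter(rollout_selections)
    n = len(rollout_selections)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def summary_uncertainty(consistency_score, perplexity, alpha=0.5):
    """Combine self-consistency and perplexity into a score in [0, 1).

    consistency_score: fraction of sampled summaries agreeing with the
    majority answer (higher = more consistent).
    perplexity: mean token-level perplexity of the generated summary.
    NOTE: alpha and the perplexity squashing below are hypothetical
    choices for illustration, not the paper's exact formulation.
    """
    ppl_term = 1.0 - 1.0 / perplexity  # maps ppl in [1, inf) onto [0, 1)
    return alpha * (1.0 - consistency_score) + (1.0 - alpha) * ppl_term
```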
On internal and multi-omics benchmarks, our approach improves factuality and calibration, nearly tripling correct and useful claims per summary (3.0→8.4 internal; 3.6→9.9 multi-omics) and substantially improving downstream survival prediction (C-index 0.32→0.63). These results demonstrate that uncertainty can serve as a control signal, enabling agents to abstain, communicate confidence, and become more reliable tools for complex structured-data environments.
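The abstain-or-emit behavior can be sketched as a simple inference-time gate over the two signals. The thresholds `h_max` and `u_max`, and the confidence readout, are hypothetical stand-ins for whatever calibration the full method uses:

```python
def filter_or_abstain(summary, retrieval_h, summary_u,
                      h_max=1.0, u_max=0.6):
    """Emit the summary only when both uncertainty signals fall below
    (hypothetical) thresholds; otherwise abstain and surface the raw
    uncertainty so a downstream consumer can decide what to do."""
    if retrieval_h <= h_max and summary_u <= u_max:
        return {"summary": summary, "confidence": 1.0 - summary_u}
    return {"summary": None, "abstained": True,
            "retrieval_entropy": retrieval_h,
            "summary_uncertainty": summary_u}
```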
Primary Area: reinforcement learning
Submission Number: 8029