Quantify Uncertainty and Hallucination in Foundation Models: The Next Frontier in Reliable AI

Published: 03 Dec 2024, Last Modified: 03 Dec 2024 (ICLR 2025 Workshop Proposals, CC BY 4.0)
Keywords: Uncertainty in representation learning, Transparency, Generative AI and large language models
Abstract: *How can we trust large language models (LLMs) when they generate text with confidence, yet sometimes hallucinate or fail to recognize their own limitations?* As foundation models such as LLMs and multimodal systems become pervasive across high-stakes domains, from healthcare and law to autonomous systems, the need for uncertainty quantification (UQ) is more critical than ever. Uncertainty quantification provides a measure of how much confidence a model has in its predictions or generations, allowing users to assess when to trust the outputs and when human oversight is needed. This workshop focuses on UQ and hallucination in modern LLMs and multimodal systems and explores the open questions in this domain.
Submission Number: 42
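
To make the abstract's notion of an uncertainty signal concrete, here is a minimal, illustrative sketch (not part of the proposal) of one widely used idea: sample a model several times on the same prompt and measure the entropy of the resulting answer distribution, where higher entropy suggests the output deserves human oversight. The function name and the sample data below are assumptions for illustration, not an API of any particular system.

```python
# Illustrative sketch only: entropy of the empirical distribution over
# answers obtained by sampling a model repeatedly on one prompt.
# The `samples` list is hypothetical stand-in data; in practice it would
# come from repeated generations at a non-zero sampling temperature.

from collections import Counter
import math

def predictive_entropy(sampled_answers: list[str]) -> float:
    """Entropy (in nats) of the empirical answer distribution.

    Low entropy -> the model answers consistently (higher confidence);
    high entropy -> the samples disagree (a cue for human oversight).
    """
    counts = Counter(sampled_answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Hypothetical samples for one prompt: the model mostly agrees with itself.
samples = ["Paris", "Paris", "Paris", "Lyon", "Paris"]
print(f"Predictive entropy: {predictive_entropy(samples):.3f} nats")
```

In practice, exact string matching would typically be replaced by semantic clustering of the sampled answers, since generations that differ in wording may still express the same answer.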