Keywords: Conformal Prediction, Large Language Models, Survey
Abstract: The rapid evolution of large language models (LLMs) and natural language processing (NLP) has heightened the need to quantify and communicate uncertainty across diverse tasks.
Conformal prediction (CP) offers a distribution-free, model-agnostic framework for constructing uncertainty sets with finite-sample guarantees under mild assumptions, making it particularly attractive for black-box LLM deployments. Existing surveys (e.g., https://doi.org/10.1162/tacl_a_00715) summarize classical CP methodology and early NLP applications, but recent progress in LLM-centric settings, including open-ended generation, reasoning, multimodal systems, and factuality, has rapidly expanded both the technical toolkit and the evaluation protocols used to validate reliability. This survey synthesizes these new developments by organizing recent CP methods for LLMs and mapping them to representative NLP-related applications, with an emphasis on how different design choices translate into practical uncertainty statements.
We conclude by highlighting emerging challenges and open directions for making CP a dependable component of reliable LLM deployment.
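To make the finite-sample guarantee mentioned above concrete, here is a minimal sketch of split conformal prediction (the standard construction; this is illustrative and not taken from the submission). The names `cal_scores`, `score`, and `candidates` are hypothetical placeholders for a held-out calibration set's nonconformity scores, a scoring function, and a candidate output space.

```python
import numpy as np

# Minimal split conformal prediction sketch (illustrative; hypothetical names).
# Under exchangeability of calibration and test points, the resulting set
# contains the true output with probability >= 1 - alpha.

def conformal_quantile(cal_scores, alpha):
    """Finite-sample-adjusted quantile of calibration nonconformity scores."""
    n = len(cal_scores)
    # Ceiling adjustment yields the marginal coverage guarantee.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q_level, 1.0), method="higher")

def prediction_set(candidates, score, x, q_hat):
    """All candidate outputs whose nonconformity score is within the threshold."""
    return [y for y in candidates if score(x, y) <= q_hat]
```

For a black-box LLM, `score(x, y)` might be, e.g., one minus the model's (possibly calibrated) confidence in candidate `y`; the guarantee holds regardless of the model, which is why CP suits black-box deployments.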
Paper Type: Long
Research Area: Summarization
Research Area Keywords: conformal prediction, large language model, survey
Contribution Types: Position papers, Surveys
Languages Studied: large language model
Submission Number: 8124