Conformal Margin Risk Minimization: An Envelope Framework for Robust Learning under Label Noise

Published: 03 Feb 2026, Last Modified: 03 Feb 2026
AISTATS 2026 Poster
License: CC BY 4.0
TL;DR: We propose CMRM, an envelope framework that uses confidence margins and conformal quantiles to enhance the robustness of existing methods to arbitrary label noise under weaker assumptions.
Abstract: Training reliable classifiers under label noise is challenging. Existing methods often rely on restrictive assumptions about the noise distribution, the model design, or access to clean data; such assumptions rarely hold in practice, especially under severe or heterogeneous noise. We propose Conformal Margin Risk Minimization (CMRM), an uncertainty-aware envelope framework that improves the robustness of prior methods trained on noisily labeled data. Specifically, CMRM computes the confidence margin as the gap between the confidence score of the observed label and those of the other labels, and a conformal quantile estimated over a batch of examples then provides a statistically valid proxy for the set-level quantile. Minimizing the conformal margin risk focuses training on low-uncertainty (high-margin) samples while filtering out high-uncertainty (low-margin) samples below the quantile as likely mislabeled. We derive a learning bound for CMRM under arbitrary label noise with weaker assumptions than prior work. Experiments show that CMRM consistently improves the accuracy and robustness of prior methods across classification benchmarks without prior knowledge of the noise.
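The abstract gives no pseudocode, so the following is a minimal sketch of the batch-level margin-and-quantile filtering it describes, assuming a softmax classifier, a standard cross-entropy base loss, and a quantile level `alpha`; the function name, `alpha`, and the choice of base loss are hypothetical illustrations, not the paper's actual CMRM objective or its envelope wrapping of a base method.

```python
import torch
import torch.nn.functional as F

def cmrm_batch_loss(logits: torch.Tensor, labels: torch.Tensor,
                    alpha: float = 0.1) -> torch.Tensor:
    """Sketch of one CMRM-style training step on a batch (hypothetical API).

    logits: (B, C) raw class scores; labels: (B,) observed, possibly noisy labels.
    alpha: assumed quantile level for the batch conformal quantile (not from the paper).
    """
    probs = F.softmax(logits, dim=1)                             # confidence scores
    conf_obs = probs.gather(1, labels.unsqueeze(1)).squeeze(1)   # score of observed label
    # Mask out the observed label, then take the best competing label's score.
    masked = probs.scatter(1, labels.unsqueeze(1), float("-inf"))
    conf_other = masked.max(dim=1).values
    margin = conf_obs - conf_other                               # confidence margin per example

    # Batch-level conformal quantile as a proxy for the set-level quantile.
    q = torch.quantile(margin.detach(), alpha)

    # Keep low-uncertainty (high-margin) samples; filter samples below the
    # quantile as likely mislabeled.
    keep = margin >= q
    if keep.sum() == 0:
        return logits.new_zeros(())                              # degenerate batch: no update
    return F.cross_entropy(logits[keep], labels[keep])
```

In this sketch, detaching the margins before the quantile keeps the threshold fixed within a step, so gradients flow only through the retained samples' losses; how CMRM actually couples the quantile to training is specified in the paper, not here.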
Submission Number: 1864