Generative Modeling Reinvents Supervised Learning: Label Repurposing with Predictive Consistency Learning

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Predicting labels directly from data has been the standard in label learning tasks, e.g., supervised learning, where models often prioritize feature compression and extraction from inputs under the assumption that label information is less complex. However, many recent prediction tasks involve complex labels, exacerbating the challenge of learning mappings from learned features to high-fidelity label representations. To this end, we draw inspiration from the consistency training concept in generative consistency models and propose predictive consistency learning (PCL), a novel learning paradigm that decomposes the full label information into a progressive learning procedure, mitigating the label capture challenge. Besides data inputs, PCL takes noise-perturbed labels as an additional reference input and pursues predictive consistency across different noise levels. It simultaneously learns the relationship between latent features and a spectrum of label information, which enables progressive learning for complex predictions and allows multi-step inference analogous to gradual denoising, thereby enhancing prediction quality. Experiments on vision, text, and graph tasks show the superiority of PCL over conventional supervised training in complex label prediction tasks.
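To make the described mechanism concrete, below is a minimal, hypothetical PyTorch sketch of a PCL-style training step as the abstract describes it: the model conditions on the data input plus a noise-perturbed label at a sampled noise level, and predictions at adjacent noise levels are pulled toward each other while the less-noisy branch serves as a stop-gradient target, as in consistency training. The function name, noise schedule, loss weighting, and the `model(x, y_noisy, t)` signature are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical sketch of a PCL-style training step; names and schedule are assumptions.
import torch
import torch.nn.functional as F

def pcl_training_step(model, x, y, num_levels=10):
    """One consistency-style update.

    x: data batch, shape (B, ...)
    y: label representation, shape (B, D) (assumed continuous/embedded)
    model: callable model(x, y_noisy, t) -> predicted label, shape (B, D)
    """
    B = y.shape[0]
    # Sample a noise level t and its lighter neighbor t-1 from a discrete schedule in (0, 1].
    t_idx = torch.randint(1, num_levels + 1, (B,), device=y.device)
    t_hi = t_idx.float() / num_levels          # heavier perturbation
    t_lo = (t_idx - 1).float() / num_levels    # lighter perturbation
    noise = torch.randn_like(y)
    y_hi = (1 - t_hi.view(-1, 1)) * y + t_hi.view(-1, 1) * noise
    y_lo = (1 - t_lo.view(-1, 1)) * y + t_lo.view(-1, 1) * noise
    # Same data input, different noisy-label references.
    pred_hi = model(x, y_hi, t_hi)
    with torch.no_grad():                      # stop-gradient target branch
        pred_lo = model(x, y_lo, t_lo)
    consistency = F.mse_loss(pred_hi, pred_lo) # agree across noise levels
    supervised = F.mse_loss(pred_hi, y)        # still anchor to the true label
    return consistency + supervised
```

Under this reading, multi-step inference would start from a pure-noise label reference at the highest level and repeatedly feed the model's prediction back in at progressively lower noise levels, analogous to the gradual denoising the abstract mentions.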
Lay Summary: In machine learning, labels are typically viewed as simple answers that models aim to predict from data. However, many real-world tasks involve complex labels that contain richer information beyond just a final answer. This work explores a fundamental question: when labels hold valuable information, can they be used to aid learning instead of merely serving as prediction targets? We introduce a novel approach to unlock this hidden value in labels by treating them not only as targets but also as informative references during training. Our method, Predictive Consistency Learning (PCL), inspired by generative consistency models, breaks down label information into a progressive learning process. Besides data inputs, PCL takes noise-perturbed labels as an additional reference and pursues predictive consistency across different noise levels. This strategy shows promise across diverse data types such as images, text, and graphs. By demonstrating the effectiveness of incorporating label information into the model input as a reference, this study opens new avenues for rethinking how labels are utilized in machine learning.
Link To Code: https://github.com/Thinklab-SJTU/predictive-consistency-learning
Primary Area: General Machine Learning->Supervised Learning
Keywords: Supervised Learning, Generative Models, Consistency Models
Submission Number: 3151