SGD-Based Knowledge Distillation with Bayesian Teachers: Theory and Guidelines

ICLR 2026 Conference Submission 13017 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: knowledge distillation, SGD-based learning, Bayesian machine learning
TL;DR: We adopt a Bayesian perspective on KD to analyze student convergence under SGD and advocate using Bayesian deep learning models as teachers to improve student performance.
Abstract: Knowledge Distillation (KD) is a central paradigm for transferring knowledge from a large teacher network to a typically smaller student model, often by leveraging the teacher's soft probabilistic outputs. While KD has shown strong empirical success in numerous applications, its theoretical underpinnings remain only partially understood. In this work, we adopt a Bayesian perspective on KD to rigorously analyze the convergence behavior of students trained with Stochastic Gradient Descent (SGD). We study two regimes: $(i)$ supervision with the exact Bayes Class Probabilities (BCPs) provided by the teacher; and $(ii)$ supervision with noisy approximations of the BCPs. Our analysis shows that, compared to one-hot supervision, learning from BCPs reduces gradient variance and removes neighborhood terms in the convergence bounds. We further characterize how the level of noise affects generalization and accuracy. Motivated by these insights, we advocate the use of Bayesian deep learning models, which typically provide improved estimates of the BCPs, as teachers in KD. Consistent with our analysis, we experimentally demonstrate that students distilled from Bayesian teachers not only achieve higher accuracies (up to +4.27\%) but also exhibit more stable convergence (up to 30\% less noise) compared to students distilled from deterministic teachers.
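To make the setting concrete, here is a minimal sketch (not the authors' implementation) of SGD-based distillation from a Bayesian teacher whose Monte Carlo-averaged predictive distribution is used as an estimate of the BCPs. The architectures, the use of MC dropout as the Bayesian approximation, the temperature, and the loss weighting are illustrative assumptions, not details from the paper.

```python
# Sketch: distill a small student with SGD from a "Bayesian" teacher whose
# MC-dropout-averaged softmax outputs serve as estimated Bayes Class Probabilities.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10

# Teacher with dropout, treated as an approximate Bayesian model (assumption).
teacher = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(512, NUM_CLASSES),
)
# Smaller deterministic student trained with plain SGD.
student = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, NUM_CLASSES),
)
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)


def bayesian_teacher_probs(x, temperature=2.0, num_samples=16):
    """Estimate the BCPs by averaging tempered softmax outputs over stochastic passes."""
    teacher.train()  # keep dropout active so each forward pass is a posterior-like sample
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(teacher(x) / temperature, dim=-1) for _ in range(num_samples)]
        )
    return probs.mean(dim=0)


def distillation_step(x, y, alpha=0.9, temperature=2.0):
    """One SGD step on a convex combination of the KD loss (KL divergence to the
    teacher's estimated BCPs) and the standard one-hot cross-entropy loss."""
    soft_targets = bayesian_teacher_probs(x, temperature)
    student_logits = student(x)
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        soft_targets,
        reduction="batchmean",
    )
    ce_loss = F.cross_entropy(student_logits, y)
    loss = alpha * (temperature ** 2) * kd_loss + (1 - alpha) * ce_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage with random tensors standing in for a real dataset.
x_batch = torch.randn(32, 784)
y_batch = torch.randint(0, NUM_CLASSES, (32,))
print(distillation_step(x_batch, y_batch))
```

A deterministic-teacher baseline corresponds to `num_samples=1` with dropout disabled; the averaging over stochastic passes is what is meant to yield the better BCP estimates (and hence the variance reduction) that the abstract attributes to Bayesian teachers.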
Primary Area: learning theory
Submission Number: 13017