Transition Machine Teaching

TMLR Paper5959 Authors

22 Sept 2025 (modified: 26 Sept 2025) · Under review for TMLR · CC BY 4.0
Abstract: Machine teaching seeks to minimize the divergence between teacher and learner in the model parameter space in order to identify critical training data. However, conventional methods for achieving this typically rely on closed-form function operations, which often introduce inconsistencies between parameter spaces. Theoretically, these inconsistencies diminish the interpretability of the learner and reduce it to a black-box system. This paper advocates a paradigm shift in machine teaching, transitioning from \emph{conventional direct parameter-space matching} toward a more nuanced approach that \emph{aligns the teacher's parameter space with the learner's data distribution}. Specifically, we propose a novel framework that projects the learner's data distribution onto the gradient space of the converged model. This projection makes the uncertainty within the gradient transition space quantifiable, enabling redundant distributions to be identified and eliminated while the essential coverage of the trusted distribution is sampled. Exploiting the inherently unbiased properties of the teacher's parameter space, we further propose regulatory constraints that systematically guide the optimization of the learner's data distribution. Theoretical analysis and comprehensive experiments across diverse scenarios substantiate the efficacy of this transition.
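The abstract gives no implementation details, but the core idea of projecting data onto the gradient space of a converged model and pruning redundant examples can be illustrated with a minimal sketch. Everything below is our assumption, not the paper's method: the logistic-regression model, the use of gradient norm as an uncertainty proxy, the cosine-similarity redundancy threshold, and the function names (`per_example_gradients`, `select_by_gradient_coverage`) are all hypothetical.

```python
import numpy as np

def per_example_gradients(w, X, y):
    """Per-example log-loss gradients of a logistic model at converged weights w.

    g_i = (sigmoid(w . x_i) - y_i) * x_i is example i's image in gradient
    space -- one possible reading of "projecting the data distribution
    onto the gradient space of the converged model".
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    return (p - y)[:, None] * X        # shape: (n_examples, n_features)

def select_by_gradient_coverage(G, k, redundancy=0.95):
    """Greedily pick k examples with large gradient norm (an uncertainty
    proxy), skipping any whose gradient direction nearly duplicates one
    already selected (cosine similarity above `redundancy`)."""
    norms = np.linalg.norm(G, axis=1)
    order = np.argsort(-norms)                 # most "uncertain" first
    units = G / (norms[:, None] + 1e-12)       # unit gradient directions
    chosen = []
    for i in order:
        if len(chosen) == k:
            break
        if all(units[i] @ units[j] < redundancy for j in chosen):
            chosen.append(i)
    return np.array(chosen)

# Toy usage: random data and a stand-in for the converged teacher weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w = rng.normal(size=5)
y = (X @ w > 0).astype(float)
G = per_example_gradients(w, X, y)
print(select_by_gradient_coverage(G, k=10))
```

The greedy cosine filter is just one way to "eliminate redundant distributions while sampling essential coverage"; the paper's actual uncertainty quantification and regulatory constraints are presumably more involved.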
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Bruno_Loureiro1
Submission Number: 5959