Keywords: Machine Unlearning, Spectral Analysis, Orthogonal Subspace Representation, Loss Function Design
TL;DR: Our proposed method explores the capabilities of orthogonal subspaces and the potential of the MSE loss to improve the class-forgetting task.
Abstract: Machine unlearning supports the "right to be forgotten" by removing the influence of designated classes without requiring full retraining. We introduce geometry-aware classifier heads that enforce intra-class alignment and inter-class orthogonality, embedding features as a union of one-dimensional orthogonal subspaces. Coupled with state-of-the-art unlearning methods and an error-maximizing noise scheme for data-independent updates, this structure enables selective suppression of the forgotten class while preserving classification accuracy on retained classes. To assess genuine forgetting rather than mere misclassification, we propose a spectral-angle test that certifies removal of the forgotten subspace and complements standard metrics: unlearning/retention accuracy (UA/RA), their test-set counterparts (TUA/TRA), and a membership-inference measure (MIA). We further study loss-head pairings by contrasting cross-entropy (CE) and mean-squared error (MSE) under two operating regimes, \emph{Quick} and \emph{Optimum}, which reflect different compute budgets. On CIFAR-10 in a leave-one-class-out protocol (100 trials), the framework achieves near-perfect unlearning (UA $\leq$ 0.9\%) with high retention (RA $\approx$ 95--96\%) and consistent generalization to held-out data (low TUA, high TRA), often matching retraining baselines while reducing computational cost. These results show that enforcing subspace structure and choosing an appropriate loss yields robust and selective forgetting with strong retention and privacy.
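To make the geometry-aware head concrete, here is a minimal sketch (not the authors' released code) of a classifier head with fixed orthonormal class directions, so each class occupies its own one-dimensional subspace; the MSE loss pushes features onto their class anchor and away from all others. All names (OrthogonalHead, feat_dim, anchors) are our own assumptions for illustration.

```python
import torch
import torch.nn as nn

class OrthogonalHead(nn.Module):
    """Classifier head whose class directions form an orthonormal set."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        assert feat_dim >= num_classes
        # Random orthonormal anchors: one unit direction per class.
        q, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
        self.register_buffer("anchors", q)  # (feat_dim, num_classes), orthonormal columns

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Cosine alignment of each normalized feature with every class direction.
        feats = nn.functional.normalize(feats, dim=1)
        return feats @ self.anchors  # "logits" in [-1, 1]

def mse_alignment_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Drive true-class alignment toward 1 and all other alignments toward 0,
    # enforcing intra-class alignment and inter-class orthogonality.
    target = nn.functional.one_hot(labels, logits.size(1)).float()
    return nn.functional.mse_loss(logits, target)
```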
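The error-maximizing noise scheme for data-independent updates can be sketched as follows: optimize a noise tensor so the frozen model's loss on the forgotten class is maximized, then use that noise in an impair-style fine-tuning step. Hyperparameters, the helper name, and the optimizer choice below are assumptions, not the paper's code.

```python
import torch

def error_maximizing_noise(model: torch.nn.Module, forget_label: int,
                           shape: tuple, steps: int = 100, lr: float = 0.1) -> torch.Tensor:
    """Learn noise that maximizes the model's error on the forgotten class."""
    noise = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=lr)  # only the noise is updated; model stays frozen
    labels = torch.full((shape[0],), forget_label, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        # Gradient ascent on the cross-entropy of the forgotten class,
        # implemented by minimizing its negation.
        loss = -torch.nn.functional.cross_entropy(model(noise), labels)
        loss.backward()
        opt.step()
    return noise.detach()
```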
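The spectral-angle test admits a similarly compact sketch: estimate the dominant direction of the forgotten class's post-unlearning features via SVD and measure its principal angle to that class's original anchor; a large angle indicates the subspace was removed rather than merely relabeled. The threshold and function names here are illustrative, not the paper's certified procedure.

```python
import torch

def principal_angle_deg(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Principal angle (degrees) between two direction vectors."""
    cos = torch.abs(torch.dot(u, v)) / (u.norm() * v.norm())
    return torch.rad2deg(torch.arccos(cos.clamp(-1.0, 1.0)))

def spectral_angle_test(forget_feats: torch.Tensor,
                        class_anchor: torch.Tensor,
                        min_angle_deg: float = 60.0) -> bool:
    """Pass if the forgotten class's dominant feature direction has rotated away."""
    centered = forget_feats - forget_feats.mean(dim=0)
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    top_dir = vh[0]  # leading right singular vector of the forget-class features
    return bool(principal_angle_deg(top_dir, class_anchor) >= min_angle_deg)
```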
Supplementary Material: pdf
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 15458