Improving the Sensitivity of Backdoor Detectors via Class Subspace Orthogonalization

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Deep Learning, Adversarial Learning, Backdoor Detection
Abstract: Most post-training backdoor detection methods rely on attacked models exhibiting extreme outlier detection statistics for the target class of an attack, compared to non-target classes. However, these approaches may fail: (1) when some (non-target) classes are easily discriminable from all others, in which case they may _naturally_ achieve extreme detection statistics (e.g., decision confidence); and (2) when the backdoor is subtle, i.e., when its features are weak relative to intrinsic class-discriminative features. A key observation is that the backdoor target class receives contributions to its detection statistic from both the backdoor trigger _and_ its intrinsic features, whereas non-target classes receive contributions _only_ from their intrinsic features. To achieve more sensitive detectors, we thus propose to _suppress_ intrinsic features while optimizing the detection statistic for a given class. For non-target classes, such suppression drastically reduces the achievable statistic, whereas for the target class the (significant) contribution from the backdoor trigger remains. In practice, we formulate a constrained optimization problem that leverages a small set of clean examples from a given class, optimizing the detection statistic while orthogonalizing with respect to the class's intrinsic features. We dub this approach "class subspace orthogonalization" (CSO). CSO can be applied in a plug-and-play fashion to a wide variety of existing detectors. We demonstrate its effectiveness in improving several well-known detectors, comparing against a variety of baseline detectors and attacks on the CIFAR-10, GTSRB, and TinyImageNet domains. Moreover, to make the detection problem even more challenging, we also evaluate against a novel mixed clean/dirty-label poisoning attack that is more surgical and harder to detect than traditional dirty-label attacks. Finally, we evaluate CSO against an adaptive attack designed to defeat it, with promising detection results.
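The abstract does not spell out the constrained optimization, so the following is only a minimal illustrative sketch of the orthogonalization idea: estimate a class's intrinsic-feature subspace from a few clean examples, then score a candidate trigger after projecting its feature-space effect off that subspace. The feature extractor `feat_extractor`, the classifier head `model.head`, the SVD-based subspace estimate, and the target-class logit as the detection statistic are all assumed, hypothetical choices, not the paper's actual formulation.

```python
# Illustrative sketch of class subspace orthogonalization (CSO) as described
# in the abstract. All architectural names and the choice of detection
# statistic below are assumptions for illustration only.
import torch

def class_subspace_basis(features: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Orthonormal basis (top-k right singular vectors) of the centered
    clean-feature matrix for one class, spanning its 'intrinsic' subspace.

    features: (n_clean, d) penultimate-layer features of clean examples.
    Returns: (k, d) basis with orthonormal rows.
    """
    centered = features - features.mean(dim=0)
    _, _, Vt = torch.linalg.svd(centered, full_matrices=False)
    return Vt[:k]

def orthogonalized_statistic(model, feat_extractor, x, delta, target, basis):
    """Detection statistic (here, the mean target-class logit -- an assumed
    choice) computed after removing the component of the trigger-induced
    feature shift that lies in the class's intrinsic subspace."""
    f_clean = feat_extractor(x)          # (B, d) features of clean inputs
    f_trig = feat_extractor(x + delta)   # (B, d) features of triggered inputs
    shift = f_trig - f_clean
    # Project the shift onto the subspace and subtract, keeping only the
    # component orthogonal to the class's intrinsic features.
    shift_orth = shift - shift @ basis.T @ basis
    logits = model.head(f_clean + shift_orth)
    return logits[:, target].mean()

# Per-class trigger optimization loop (reverse-engineering style, assumed):
#   delta = torch.zeros_like(x_clean[0:1], requires_grad=True)
#   opt = torch.optim.Adam([delta], lr=1e-2)
#   for _ in range(num_steps):
#       stat = orthogonalized_statistic(model, feat_extractor,
#                                       x_clean, delta, c, basis)
#       loss = -stat + lam * delta.abs().mean()  # maximize statistic,
#       opt.zero_grad(); loss.backward(); opt.step()  # keep trigger small
```

Under this sketch, a non-target class's statistic collapses once its intrinsic directions are projected out, while a true target class retains the trigger's (subspace-orthogonal) contribution, which is the separation the abstract argues for.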
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 9742