GUARD: General Unsupervised Adversarial Robust Defense for Deep Multi-View Clustering via Information Bottleneck
Keywords: multi-view learning, clustering, deep learning, adversarial defense
TL;DR: A new adversarial defense method for deep multi-view clustering, covering both complete and incomplete views, via the Information Bottleneck.
Abstract: The integrity of Deep Multi-View Clustering (DMVC) is fundamentally challenged by adversarial attacks, which corrupt the learning process by injecting a malicious, task-misaligned signal. Existing adversarial defense methods for DMVC are model-specific, non-transferable, and limited to complete multi-view scenarios. To address this, we introduce Multi-view Adversarial Purification (MAP), a novel defense paradigm that reframes unsupervised purification as a principled, information-theoretic problem of signal separation. We present GUARD, the first framework to operationalize the MAP paradigm; it instantiates the principles of the Multi-View Information Bottleneck. GUARD is designed to satisfy a dual objective: 1) it maximizes informational sufficiency with respect to the benign data, ensuring the preservation of all task-relevant information; and 2) it enforces purity against the adversarial signal by creating a bottleneck that discards it. Crucially, GUARD achieves this duality not with an explicit penalty term, but through a self-supervised design in which the information bottleneck emerges as a property of the optimization dynamics. Extensive experiments validate that our model-agnostic, unsupervised framework effectively purifies adversarial data, significantly enhancing the robustness of a wide range of DMVC models.
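For intuition only (the abstract describes GUARD's objective qualitatively, so the symbols and the explicit penalty below are illustrative assumptions rather than the paper's formulation), the classical information-bottleneck trade-off that the MAP paradigm invokes can be sketched as

$$\max_{Z} \; I(Z; Y) \;-\; \beta \, I(Z; X), \qquad \beta > 0,$$

where $X$ denotes the observed (possibly attacked) multi-view input, $Y$ its benign, task-relevant content, $Z$ the purified representation, and $I(\cdot\,;\cdot)$ mutual information: the first term demands sufficiency for the benign signal, while the compression term squeezes out the residual (adversarial) content of $X$. Per the abstract, GUARD is claimed to realize this trade-off without such an explicit $\beta$-penalty, letting the bottleneck emerge from its self-supervised optimization dynamics.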
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 7535