Abstract: Multi-kernel clustering (MKC) has emerged as a powerful method for capturing diverse data patterns, offering robust and generalized representations of data structures. However, the increasing deployment of MKC in real-world applications raises concerns about its vulnerability to adversarial perturbations. While adversarial robustness has been studied extensively in other domains, the vulnerability of MKC to such perturbations remains largely unexplored. In this paper, we address the challenge of assessing the adversarial robustness of MKC methods in a black-box setting. Specifically, we propose *AdvMKC*, a novel reinforcement-learning-based adversarial attack framework designed to inject imperceptible perturbations into data and mislead MKC methods. AdvMKC leverages proximal policy optimization with an advantage function to overcome the instability of clustering results during optimization. Additionally, it introduces a generator-clusterer framework, in which a generator produces adversarial perturbations while a clusterer approximates the MKC's behavior, significantly reducing computational overhead. We provide theoretical insights into the impact of adversarial perturbations on MKC and validate these findings through experiments. Evaluations across seven datasets and eleven MKC methods (seven traditional and four robust) demonstrate AdvMKC's effectiveness, robustness, and transferability.
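To make the generator-clusterer split concrete, below is a minimal sketch of what such an attack loop could look like. All names here (`Generator`, `SurrogateClusterer`, `epsilon`, the toy data) are hypothetical illustrations rather than the paper's implementation, and for brevity the PPO-with-advantage objective is replaced by direct gradient descent through the differentiable surrogate; in practice the surrogate would first be fitted to the victim MKC's cluster assignments.

```python
# Hypothetical sketch of a generator-clusterer attack loop (not the paper's code).
# A generator produces bounded perturbations; a cheap differentiable surrogate
# stands in for the expensive black-box MKC, so the attack avoids repeated
# calls to the victim method.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps a clean sample to a bounded perturbation."""

    def __init__(self, dim, epsilon=0.05):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
        self.epsilon = epsilon  # imperceptibility budget (assumed value)

    def forward(self, x):
        # tanh keeps each perturbation inside the [-epsilon, epsilon] box
        return self.epsilon * torch.tanh(self.net(x))


class SurrogateClusterer(nn.Module):
    """Differentiable stand-in approximating the MKC's soft cluster assignments."""

    def __init__(self, dim, k):
        super().__init__()
        self.head = nn.Linear(dim, k)

    def forward(self, x):
        return torch.softmax(self.head(x), dim=-1)


def attack_step(gen, clu, x, opt):
    """One gradient step: push perturbed samples away from their clean clusters."""
    with torch.no_grad():
        clean = clu(x).argmax(dim=-1)  # pseudo-assignments on clean data
    probs = clu(x + gen(x))            # assignments after perturbation
    keep_prob = probs.gather(1, clean.unsqueeze(1))
    loss = keep_prob.mean()            # minimizing this encourages cluster flips
    opt.zero_grad()
    loss.backward()
    opt.step()
    return (probs.argmax(dim=-1) != clean).float().mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(256, 16)  # toy data in place of a real dataset
    gen, clu = Generator(16), SurrogateClusterer(16, k=3)
    for p in clu.parameters():
        p.requires_grad_(False)  # freeze the surrogate; only the generator trains
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    for _ in range(200):
        flip_rate = attack_step(gen, clu, x, opt)
    print(f"fraction of samples whose cluster flipped: {flip_rate:.2f}")
```

The design point this illustrates is the one the abstract claims: once a surrogate clusterer approximates the victim's behavior, each attack update costs only a forward/backward pass through two small networks instead of a full MKC run.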
Lay Summary: Multi-Kernel Clustering (MKC) is a powerful tool for finding patterns in complex data, but its vulnerability to adversarial attacks is not well understood. These attacks make tiny, hard-to-detect changes to the data that can fool the clustering method. As MKC becomes more common in real-world applications, this weakness becomes a serious concern. To tackle this issue, we create AdvMKC, a new framework that uses reinforcement learning to perform black-box adversarial attacks on MKC. Our method adds small changes to the data to mislead the clustering process. We design it to be efficient by using a generator to create these changes and a clusterer to mimic MKC behavior. We test AdvMKC on seven datasets and eleven MKC methods and find that it works well, even on robust models. Our approach helps uncover hidden risks in MKC and offers a way to test and improve its security in practical settings.
Link To Code: https://github.com/csyuhao/AdvMKC-Official
Primary Area: General Machine Learning->Clustering
Keywords: Multi-Kernel Clustering; Adversarial Attacks
Submission Number: 4592