Reproducibility Study of "Robust Fair Clustering: A Novel Fairness Attack and Defense Framework"

Published: 15 Apr 2024, Last Modified: 17 Sept 2024
Accepted by TMLR (CC BY 4.0)
Event Certifications: reproml.org/MLRC/2023/Journal_Track
Abstract: Clustering algorithms play a pivotal role in many societal applications, where fairness is paramount to prevent adverse impacts on individuals. In this study, we revisit the robustness of fair clustering algorithms against adversarial attacks, affirming previous findings on their susceptibility and on the resilience of the Consensus Fair Clustering (CFC) model. Beyond reproducing these results, we extend the original analysis by refining the codebase for easier experimentation, introducing additional metrics and datasets to deepen the evaluation of fairness and clustering quality, and exploring novel attack strategies, including targeted attacks on the new metrics, a combined attack on balance and entropy, and an ablation study. These contributions validate the original claims about the vulnerability and resilience of fair clustering algorithms and broaden the research landscape by offering a more comprehensive toolkit for assessing adversarial robustness in fair clustering.
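The abstract refers to balance as a fairness metric for clusterings. As a minimal sketch of what such a metric can look like (this is an illustrative definition in the spirit of the fair-clustering literature, not necessarily the exact formulation used in the study), the balance of a cluster under a binary protected attribute can be taken as the smaller ratio between the two group counts, with the overall balance being the minimum over all clusters:

```python
from collections import Counter

def cluster_balance(labels, groups):
    """Balance of a clustering w.r.t. a binary protected attribute (0/1).

    For each cluster, balance = min(n_a / n_b, n_b / n_a), where n_a and
    n_b are the counts of the two protected groups in that cluster; the
    overall balance is the minimum over clusters. A value of 1 means every
    cluster is perfectly balanced; 0 means some cluster contains only one
    group. Illustrative definition, assumed for this sketch.
    """
    balances = []
    for c in set(labels):
        counts = Counter(g for lab, g in zip(labels, groups) if lab == c)
        n_a, n_b = counts.get(0, 0), counts.get(1, 0)
        if n_a == 0 or n_b == 0:
            balances.append(0.0)  # a cluster missing one group is maximally unfair
        else:
            balances.append(min(n_a / n_b, n_b / n_a))
    return min(balances)
```

Under this definition, a fairness attack aims to drive the balance toward 0 with small input perturbations, while a robust defense such as CFC aims to keep it high.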
Certifications: Reproducibility Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/iasonsky/FACT-2024
Supplementary Material: zip
Assigned Action Editor: ~Tongliang_Liu1
Submission Number: 2250