Reassessing Fairness: A Reproducibility Study of NIFA’s Impact on GNN Models

Published: 17 Jun 2025, Last Modified: 17 Jun 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Graph Neural Networks (GNNs) have shown strong performance on graph-structured data but raise fairness concerns by amplifying existing biases. The Node Injection-based Fairness Attack (NIFA) (Luo et al., 2024) is a recently proposed gray-box attack that degrades group fairness while preserving predictive utility. In this study, we reproduce and evaluate NIFA across multiple datasets and GNN architectures. Our findings confirm that NIFA consistently degrades fairness—measured via Statistical Parity and Equal Opportunity—while maintaining utility on classical GNNs. However, claims of NIFA’s superiority over existing fairness and utility attacks are only partially supported, owing to limitations in reproducing the baselines. We further extend NIFA to accommodate multi-class sensitive attributes and evaluate its behavior under varying levels of graph homophily. While NIFA remains effective in multi-class settings, its impact is more variable on mixed and highly homophilic graphs. Although this is not a comprehensive validation of all NIFA claims, our work provides targeted insights into its reproducibility and generalizability across fairness-sensitive scenarios. The codebase is publicly available at: https://github.com/sjoerdgunneweg/Reassessing-NIFA.
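The two group-fairness metrics the abstract names can be sketched as follows. This is a minimal illustration using the standard definitions of Statistical Parity (gap in positive-prediction rates between sensitive groups) and Equal Opportunity (gap in true-positive rates); the function names and toy data are illustrative, not taken from the NIFA codebase.

```python
import numpy as np

def statistical_parity(y_pred, s):
    """Absolute gap in positive-prediction rates between the two sensitive groups."""
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equal_opportunity(y_pred, y_true, s):
    """Absolute gap in true-positive rates (recall) between the two sensitive groups."""
    g0 = (s == 0) & (y_true == 1)  # positives in group 0
    g1 = (s == 1) & (y_true == 1)  # positives in group 1
    return abs(y_pred[g0].mean() - y_pred[g1].mean())

# Toy example: binary predictions, labels, and a binary sensitive attribute.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_true = np.array([1, 0, 0, 1, 1, 0, 1, 1])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])

sp = statistical_parity(y_pred, s)   # 0.75 - 0.25 = 0.5
eo = equal_opportunity(y_pred, y_true, s)  # 1.0 - 1/3 ≈ 0.667
```

A fairness attack such as NIFA aims to increase these gaps after the victim GNN is retrained on the poisoned graph, while leaving overall accuracy close to its clean value.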
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We made the submission camera-ready.
Code: https://github.com/sjoerdgunneweg/Reassessing-NIFA
Assigned Action Editor: ~Sheng_Li3
Submission Number: 4291