Privacy-Preserving Financial Anomaly Detection via Federated Learning & Multi-Party Computation

Published: 01 Jan 2024 · Last Modified: 06 Nov 2025 · ACSAC Workshops 2024 · CC BY-SA 4.0
Abstract: One of the main goals of financial institutions (FIs) such as banks, mortgage companies, and electronic fund transfer facilitators is combating fraud and financial crime. To this end, FIs use sophisticated machine learning models trained on customer data. The output of these models may be manually reviewed for critical use cases, e.g., determining the likelihood of a transaction being anomalous. While advanced machine learning models aid in anomaly detection, performance could be improved further with customer data from other FIs. However, FIs may not have consent to share data, and data privacy regulations such as GDPR and CCPA may prohibit sharing sensitive data. Jointly training accurate models on combined data is thus challenging.

In this paper, we propose a privacy-preserving framework that allows FIs to jointly train accurate anomaly detection models. Our framework combines federated learning with multi-party computation and noisy aggregates inspired by differential privacy. The framework was a winning entry in the US/UK Privacy-Enhancing Technologies (PETs) Challenge, which considered an architecture in which banks hold customer data and execute transactions through a central network. We show that our solution enables the network to train a highly accurate anomaly detection model while preserving customer data privacy. Experimental results on synthetic transaction data (7 million transactions for training and 700 thousand for inference) show that our approach improves the model's Area Under the Precision–Recall Curve (AUPRC) from 0.6 to 0.7. We also discuss the framework's generalizability to similar scenarios.
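To make the combination of techniques concrete, the following is a minimal sketch (not the paper's released code) of one round of federated averaging in which each bank hides its individual model update from the central network via pairwise additive masking, a lightweight stand-in for the secure aggregation that full multi-party computation provides, and the aggregate is perturbed with Gaussian noise in the spirit of differential privacy. All names, sizes, and parameters (NUM_BANKS, CLIP_NORM, NOISE_STD, local_update) are hypothetical illustration choices, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
NUM_BANKS, DIM = 4, 8            # number of participating FIs, model size (illustrative)
CLIP_NORM, NOISE_STD = 1.0, 0.1  # per-bank clipping bound and noise scale (illustrative)

def local_update(bank_id: int, global_model: np.ndarray) -> np.ndarray:
    """Stand-in for a bank's local training step on its private transactions."""
    grad = rng.normal(size=DIM)                # pretend gradient from local data
    norm = np.linalg.norm(grad)
    return grad * min(1.0, CLIP_NORM / norm)   # clip to bound each bank's influence

# 1) Banks agree pairwise on random masks that cancel in the sum,
#    so no single bank's update is ever visible to the aggregator.
masks = {}
for i in range(NUM_BANKS):
    for j in range(i + 1, NUM_BANKS):
        m = rng.normal(size=DIM)
        masks[(i, j)] = m    # bank i adds m ...
        masks[(j, i)] = -m   # ... bank j subtracts it

global_model = np.zeros(DIM)

# 2) Each bank submits only its masked update.
masked_updates = []
for i in range(NUM_BANKS):
    update = local_update(i, global_model)
    for j in range(NUM_BANKS):
        if j != i:
            update = update + masks[(i, j)]
    masked_updates.append(update)

# 3) The central network sums the masked updates (the masks cancel) and adds
#    calibrated Gaussian noise before applying the averaged update.
aggregate = np.sum(masked_updates, axis=0)
noisy_mean = (aggregate + rng.normal(scale=NOISE_STD, size=DIM)) / NUM_BANKS
global_model += noisy_mean
print("updated global model:", np.round(global_model, 3))

In this sketch the aggregator learns only the noisy sum of updates; per-update clipping bounds any one bank's contribution, which is what makes the added noise meaningful in a differential-privacy sense.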