Telecom Fraud Detection Based on Feature Binning and Autoencoder

Published: 01 Jan 2023 · Last Modified: 12 Feb 2025 · ICDM 2023 · CC BY-SA 4.0
Abstract: With the rapid development of modern communication technology, telecom fraud has been increasing year by year. If fraudsters can be accurately identified before they carry out their scams, people can be protected from potential losses and trust in telecom operators can be strengthened. Telecom fraud detection has therefore garnered widespread attention in both academia and industry in recent years. Although existing methods for telecom fraud detection achieve good performance, many issues remain unresolved for real-world telecom operators. First, existing methods focus on a single telecom scenario, while real-world telecom scenarios are diverse; exploiting the characteristics of these different scenarios can improve the effectiveness of telecom fraud detection. Second, existing methods usually use Graph Neural Networks (GNNs) to aggregate neighbor information. However, real-world telecom operators cannot obtain information about users of other operators, so destination-node attributes are missing, which degrades the performance of GNNs. To address these issues, we propose a new model for Telecom Fraud Detection based on Feature binning and Autoencoder (TFD-FA). In TFD-FA, a feature binning framework partitions users into different telecom scenarios so as to reflect their distinct characteristics, and an autoencoder component aggregates neighbor information. Furthermore, a class-imbalance classifier component addresses the fact that fraudsters are far less numerous than normal users. Extensive experiments on a real-world dataset demonstrate the effectiveness of TFD-FA, which outperforms the compared baseline models.
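
The abstract names the components of TFD-FA but gives no implementation detail. The following is a minimal, hypothetical sketch of the first two ideas only: partitioning users into scenarios via feature binning, then fitting an autoencoder per scenario. All feature names, bin edges, and architecture choices here are illustrative assumptions, not the paper's actual design, and the raw per-user features stand in for whatever neighbor aggregation the paper performs.

```python
# Hypothetical sketch: scenario partitioning via feature binning, followed by a
# small autoencoder fitted per scenario. Bin edges, feature names, and the
# architecture are illustrative assumptions only, not the method from the paper.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# Toy per-user features: [daily_call_count, mean_call_duration_sec]
X = rng.gamma(shape=2.0, scale=[20.0, 60.0], size=(1000, 2)).astype(np.float32)

# Feature binning: partition users into "scenarios" by call volume.
# (Assumed edges; the paper presumably derives or tunes its own partition.)
bin_edges = np.array([10.0, 40.0, 100.0])
scenario = np.digitize(X[:, 0], bin_edges)  # scenario id in 0..3 per user

class Autoencoder(nn.Module):
    """Compress features to a low-dim code; reconstruction error can flag anomalies."""
    def __init__(self, d_in: int, d_code: int = 2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 8), nn.ReLU(), nn.Linear(8, d_code))
        self.dec = nn.Sequential(nn.Linear(d_code, 8), nn.ReLU(), nn.Linear(8, d_in))

    def forward(self, x):
        return self.dec(self.enc(x))

# Train one autoencoder per scenario so each captures scenario-specific structure.
for s in np.unique(scenario):
    xs = torch.from_numpy(X[scenario == s])
    xs = (xs - xs.mean(0)) / (xs.std(0) + 1e-6)  # per-scenario normalization
    model = Autoencoder(d_in=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xs), xs)
        loss.backward()
        opt.step()
    print(f"scenario {s}: n={len(xs)}, final recon MSE={loss.item():.4f}")
```

Training a separate autoencoder per bin mirrors the abstract's claim that each telecom scenario has unique characteristics; in the actual model, the autoencoder input would presumably be aggregated call-graph neighbor information rather than raw per-user features, and the class-imbalance classifier would consume the learned representations downstream.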