Anomaly Detection Algorithms: Comparative Analysis and Explainability Perspectives

Published: 01 Jan 2023, Last Modified: 26 Aug 2024, AusDM 2023, CC BY-SA 4.0
Abstract: Anomaly detection plays a pivotal role in identifying infrequent and irregular occurrences in datasets. This paper examines and compares the effectiveness of prominent anomaly detection algorithms, including Isolation Forest, Local Outlier Factor (LOF), and One-Class Support Vector Machines (SVM). Our assessment spans a variety of datasets and evaluates key metrics such as precision, recall, F1-score, and overall accuracy. We also introduce techniques that enhance the interpretability of these algorithms, shedding light on the underlying factors that contribute to each detected anomaly. By providing insight into the attributes and behaviors associated with anomalies, our approach helps decision-makers understand the identified anomalies and make well-informed decisions grounded in the detection results. Through this comparative analysis and focus on explainability, we offer practical guidance for effective anomaly detection in real-world scenarios.
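To make the comparative setup concrete, the sketch below illustrates the kind of evaluation the abstract describes: fitting Isolation Forest, LOF, and One-Class SVM on a labelled dataset and reporting precision, recall, F1-score, and accuracy for each. This is a minimal illustration, not the paper's exact protocol; the use of scikit-learn, the synthetic data, and all hyperparameter values are assumptions for demonstration only.

```python
# Illustrative sketch (assumed scikit-learn usage and synthetic data, not the paper's setup):
# compare Isolation Forest, LOF, and One-Class SVM on precision, recall, F1, and accuracy.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

rng = np.random.RandomState(42)

# Synthetic data: one dense "normal" cluster plus a small set of uniformly scattered outliers.
X_normal, _ = make_blobs(n_samples=950, centers=1, cluster_std=1.0, random_state=42)
X_outliers = rng.uniform(low=-8, high=8, size=(50, 2))
X = np.vstack([X_normal, X_outliers])
y_true = np.hstack([np.ones(950), -np.ones(50)])  # 1 = normal, -1 = anomaly

models = {
    "Isolation Forest": IsolationForest(contamination=0.05, random_state=42),
    "Local Outlier Factor": LocalOutlierFactor(n_neighbors=20, contamination=0.05),
    "One-Class SVM": OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"),
}

for name, model in models.items():
    # LOF in its default (non-novelty) mode only supports fit_predict on the training data.
    if isinstance(model, LocalOutlierFactor):
        y_pred = model.fit_predict(X)
    else:
        y_pred = model.fit(X).predict(X)  # scikit-learn convention: -1 = anomaly, 1 = normal
    print(f"{name:22s} "
          f"precision={precision_score(y_true, y_pred, pos_label=-1):.3f} "
          f"recall={recall_score(y_true, y_pred, pos_label=-1):.3f} "
          f"F1={f1_score(y_true, y_pred, pos_label=-1):.3f} "
          f"accuracy={accuracy_score(y_true, y_pred):.3f}")
```

Treating the anomaly class as the positive label (pos_label=-1) is one common convention for these metrics; the paper's own evaluation protocol and interpretability techniques are described in the full text.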