Characterizing Anomalies with Explainable Classifiers

05 Oct 2022 (modified: 16 Nov 2022) · NeurIPS 2022 Workshop DistShift Poster
Keywords: Data-drift, Anomaly Detection, Explainability, SHAP
TL;DR: A novel SHAP-based approach to identifying and characterizing anomalous groups of points in test data
Abstract: As machine learning techniques are increasingly used to make societal-scale decisions, model performance issues stemming from data-drift can have costly consequences. While methods exist to quantify data-drift, further classifying drifted points into groups of similarly anomalous points can help practitioners combat drift (e.g., by providing context about how or where in the data pipeline the shift might have been introduced). We show how such characterization is possible by making use of tools from the model explainability literature. We also show how simple rules can be extracted to generate database queries for anomalous data and to detect anomalous data in the future.
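The workflow the abstract describes can be illustrated with a toy sketch. The sketch below is not the paper's method: it uses per-feature z-scores against reference data as a crude stand-in for SHAP values from a trained drift classifier, and all data, thresholds, and feature names are invented for illustration. It shows the overall shape of the idea: attribute each test point's anomalousness to features, group anomalous points by their dominant feature, and extract a simple per-group rule that could back a database query.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference (training-time) data: 3 features, standard normal (synthetic).
ref = rng.normal(size=(1000, 3))

# Test data with two drifted groups: one shifted in feature 0, one in feature 2.
clean = rng.normal(size=(200, 3))
drift_a = rng.normal(size=(50, 3)); drift_a[:, 0] += 5.0
drift_b = rng.normal(size=(50, 3)); drift_b[:, 2] -= 5.0
test = np.vstack([clean, drift_a, drift_b])

# Per-point, per-feature attributions: z-scores against the reference data
# (a stand-in here for SHAP values from a drift-detection classifier).
mu, sigma = ref.mean(axis=0), ref.std(axis=0)
attrib = (test - mu) / sigma

# Flag points whose largest attribution magnitude exceeds a threshold,
# then group the flagged points by their dominant attributed feature.
anomalous = np.abs(attrib).max(axis=1) > 3.0
dominant = np.abs(attrib[anomalous]).argmax(axis=1)

# Extract a simple rule (feature + value range) per group; each rule could
# be turned into a database query over the raw data.
for feat in np.unique(dominant):
    vals = test[anomalous][dominant == feat, feat]
    print(f"group: feature_{feat} in [{vals.min():.2f}, {vals.max():.2f}] "
          f"({(dominant == feat).sum()} points)")
```

In the actual approach, the attribution step would come from an explainability method such as SHAP applied to a classifier trained to separate reference from test data, so groups reflect the model's notion of drift rather than marginal feature statistics.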