Guaranteeing Robustness Against Real-World Perturbations In Time Series Classification Using Conformalized Randomized Smoothing
Keywords: Randomized smoothing, Conformal Prediction, Uncertainty Quantification, Robust Machine Learning, Domain Shifts
TL;DR: Our paper introduces a novel method combining randomized smoothing with conformal prediction to enhance robustness against domain shifts and perturbations, and proves particularly effective in time series classification.
Abstract: Certifying the robustness of machine learning models against domain shifts and input space perturbations is crucial for many applications where high-risk decisions are based on the model's predictions. Techniques such as randomized smoothing have partially addressed these issues in the past, with a focus on adversarial attacks. In this paper, we generalize randomized smoothing to arbitrary transformations and extend it to conformal prediction. The proposed approach is demonstrated on a time series classifier connected to an automotive use case. We meticulously assess the robustness of smoothed classifiers in environments subjected to various degrees and types of time-series-native perturbations and compare it against standard conformal predictors. The proposed method consistently offers superior resistance to perturbations, maintaining high classification accuracy and reliability. Additionally, we are able to bound the performance on new domains by calibrating generalization with configuration shifts in the training data. In combination, conformalized randomized smoothing may offer a model-agnostic approach to constructing robust classifiers tailored to the perturbations in their respective applications - a crucial capability for AI assurance argumentation.
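The combination described in the abstract can be sketched very roughly as follows. This is a hypothetical illustration, not the paper's actual implementation: the toy classifier, the Gaussian perturbation model, and the split-conformal calibration routine are all stand-in assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    # Hypothetical stand-in for a trained time series classifier:
    # maps simple summary features of a 1-D series to scores over 3 classes.
    feats = np.array([x.mean(), x.std(), np.abs(np.diff(x)).mean()])
    logits = np.array([feats.sum(), -feats[0], feats[2]])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def gaussian_noise(x, sigma=0.1):
    # One possible "time series native" perturbation; the paper considers
    # arbitrary transformations, of which additive noise is the simplest.
    return x + rng.normal(0.0, sigma, size=x.shape)

def smoothed_predict(x, perturb, n_samples=50):
    # Randomized smoothing: average class scores under random perturbations.
    return np.mean([toy_classifier(perturb(x)) for _ in range(n_samples)], axis=0)

def calibrate(X_cal, y_cal, alpha=0.1):
    # Split-conformal calibration on the smoothed classifier:
    # nonconformity score = 1 - smoothed probability of the true label.
    scores = [1.0 - smoothed_predict(x, gaussian_noise)[y]
              for x, y in zip(X_cal, y_cal)]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def predict_set(x, qhat):
    # Conformal prediction set: all classes whose score passes the threshold.
    probs = smoothed_predict(x, gaussian_noise)
    return [c for c in range(len(probs)) if 1.0 - probs[c] <= qhat]

X_cal = [rng.normal(size=20) for _ in range(30)]
y_cal = [int(rng.integers(0, 3)) for _ in range(30)]
qhat = calibrate(X_cal, y_cal)
print(predict_set(rng.normal(size=20), qhat))
```

Under exchangeability of the calibration and test data, such prediction sets cover the true label with probability at least 1 - alpha; smoothing the scores over perturbations is what makes the sets stable under the corresponding input shifts.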
List Of Authors: Franco, Nicola and Spiegelberg, Jakob and Lorenz, Jeanette Miriam and Guennemann, Stephan
Latex Source Code: zip
Signed License Agreement: pdf
Submission Number: 112