On the Effects of Filtering Methods on Adversarial Timeseries Data

Published: 01 Jan 2023, Last Modified: 14 May 2025 · GeoPrivacy@SIGSPATIAL 2023 · CC BY-SA 4.0
Abstract: Adversarial machine learning is well studied in image classification. Other domains, however, such as deep timeseries classification, have not received comparable attention, leaving them disproportionately vulnerable. In particular, adversarial defenses for deep timeseries classifiers have only been investigated in the context of attack detection, and the methods proposed so far perform poorly and fail to generalize across attacks, limiting their real-world applicability. In this work, we investigate adversarial defense via input data purification for deep timeseries classifiers. We subject clean and adversarially-perturbed univariate timeseries data to four simple filtering methods to establish whether such methods could serve as purification-based adversarial defenses. In experiments on five publicly-available datasets, we identify and compare the benefits of the various filtering techniques. Finally, we discuss our results and provide directions for further investigation.
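The abstract does not name the four filtering methods studied. As an illustration only, a minimal sketch of purification by one common smoothing technique, a centered moving-average filter applied to a univariate series (all names and parameters here are hypothetical, not taken from the paper), might look like:

```python
import numpy as np

def moving_average_filter(x: np.ndarray, window: int = 5) -> np.ndarray:
    """Smooth a univariate timeseries with a centered moving average.

    Low-pass smoothing attenuates small high-frequency perturbations
    (such as many adversarial ones) while preserving the coarse shape
    of the series.
    """
    kernel = np.ones(window) / window
    # mode="same" keeps the output the same length as the input
    return np.convolve(x, kernel, mode="same")

# Toy example: a clean sinusoid plus a small high-frequency "perturbation"
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)
perturbed = clean + 0.1 * np.sin(40 * t)
purified = moving_average_filter(perturbed, window=7)

# Compare mean squared error to the clean signal before and after
# filtering (edges excluded, where the convolution is truncated)
err_before = np.mean((perturbed[10:-10] - clean[10:-10]) ** 2)
err_after = np.mean((purified[10:-10] - clean[10:-10]) ** 2)
```

In this sketch, the filtered series is closer to the clean signal than the perturbed one, which is the intuition behind filtering-based purification; whether that holds for real attacks and classifiers is precisely what the paper's experiments evaluate.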