Abstract: We develop a new method to detect anomalies within time series, which is essential in many
application domains, ranging from self-driving cars, finance, and marketing to medical
diagnosis and epidemiology. The method is based on self-supervised deep learning that has
played a key role in facilitating deep anomaly detection on images, where powerful image
transformations are available. However, such transformations are largely unavailable for time
series. To address this, we develop Local Neural Transformations (LNT), a method that learns
local transformations of time series from data. The method produces an anomaly score for
each time step and thus can be used to detect anomalies within time series. We prove in
a theoretical analysis that our novel training objective is more suitable for transformation
learning than previous deep Anomaly detection (AD) methods. Our experiments demonstrate
that LNT can find anomalies in speech segments from the LibriSpeech data set and better
detect interruptions to cyber-physical systems than previous work. Visualization of the
learned transformations gives insight into the type of transformations that LNT learns.
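To make the per-time-step scoring idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation): it assumes a convolutional encoder that yields one latent code per time step, a small set of learned neural transformations applied to each local code, and a NeuTraL-AD-style contrastive score used directly as the anomaly score. All class and parameter names (LocalTransformScorer, num_transforms, the temperature 0.1) are illustrative assumptions.

# Hypothetical sketch of per-time-step anomaly scoring with learned local
# transformations of latent codes; not the paper's exact architecture or loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalTransformScorer(nn.Module):
    def __init__(self, in_dim=1, latent_dim=32, num_transforms=4):
        super().__init__()
        # 1D-convolutional encoder: one latent vector per time step (assumed choice).
        self.encoder = nn.Conv1d(in_dim, latent_dim, kernel_size=5, padding=2)
        # One small MLP per learned transformation of the local latent code.
        self.transforms = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                          nn.Linear(latent_dim, latent_dim))
            for _ in range(num_transforms)
        ])

    def forward(self, x):
        # x: (batch, channels, time) -> z: (batch, time, latent_dim)
        z = F.normalize(self.encoder(x).transpose(1, 2), dim=-1)
        views = torch.stack(
            [F.normalize(t(z), dim=-1) for t in self.transforms], dim=2
        )  # (batch, time, K, latent_dim): K transformed views of each local code
        # Contrastive-style score: each view should be similar to the original code
        # (positive) and dissimilar to the other views (negatives).
        sim_pos = (views * z.unsqueeze(2)).sum(-1)               # (batch, time, K)
        sim_neg = torch.einsum("btkd,btld->btkl", views, views)  # view-vs-view
        mask = torch.eye(views.shape[2], dtype=torch.bool)
        sim_neg = sim_neg.masked_fill(mask, float("-inf"))       # drop self-similarity
        logits = torch.cat([sim_pos.unsqueeze(-1), sim_neg], dim=-1) / 0.1
        # Anomaly score per time step: negative log-probability of the positive pair,
        # summed over the K learned transformations.
        return -F.log_softmax(logits, dim=-1)[..., 0].sum(-1)    # (batch, time)

x = torch.randn(2, 1, 100)          # toy batch of univariate time series
scores = LocalTransformScorer()(x)  # one anomaly score per time step
print(scores.shape)                 # torch.Size([2, 100])

At test time, time steps whose score exceeds a threshold calibrated on normal data would be flagged as anomalous; in training, the same contrastive quantity would serve as the self-supervised objective that shapes the transformations.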
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Addressed the reviewers' comments.
Changes are marked in blue.
Assigned Action Editor: ~Cedric_Archambeau1
Submission Number: 531