An LSTM Approach to Detection of Autonomous Vehicle Hijacking

12 Oct 2018 (modified: 05 May 2023) · NIPS 2018 Workshop MLITS Submission
Abstract: In recent decades, automotive research has focused on creating a driverless future. Autonomous vehicles are expected to take over tasks that are dull, dirty and dangerous for humans (the 3Ds of robotization). However, increased autonomy heightens reliance on the robustness of the system. Autonomous vehicle systems rely heavily on data acquisition in order to perceive the driving environment accurately. In the future, a typical autonomous vehicle data ecosystem will include data from internal sensors, infrastructure, communication with nearby vehicles, and other sources. Physical faults, malicious attacks or a misbehaving vehicle can result in incorrect perception of the environment, which can in turn lead to task failure or accidents. Anomaly detection is hence expected to play a critical role in improving the security and efficiency of autonomous and connected vehicles. Anomaly detection can be defined simply as a way of identifying unusual or unexpected events and/or measurements. In this paper, we focus on the specific case of a malicious attack/hijacking of the system which results in unpredictable evolution of the autonomous vehicle. We use a Long Short-Term Memory (LSTM) network for anomaly/fault detection. It is first trained on non-abnormal data to learn the system's baseline performance and behaviour, monitored through four vehicle control parameters, namely velocity, acceleration, jerk and steering rotation. The model is then used to predict over a number of future time steps, and an alarm is raised as soon as the observed behaviour of the autonomous car deviates significantly from the prediction. The relevance of this approach is supported by numerical experiments based on data produced by an autonomous car simulator capable of generating attacks on the system.
Keywords: on-line hijacking detection, LSTM networks
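The detection scheme summarized in the abstract (an LSTM forecaster over velocity, acceleration, jerk and steering rotation, with an alarm when observed behaviour deviates from the prediction) can be illustrated with the minimal PyTorch sketch below. This is not the authors' implementation: the window length, prediction horizon, hidden size, and thresholding rule are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): LSTM forecaster over the four control
# signals named in the abstract, with a residual-threshold alarm.
import torch
import torch.nn as nn

N_FEATURES = 4   # velocity, acceleration, jerk, steering rotation
WINDOW = 20      # past time steps fed to the LSTM (assumption)
HORIZON = 5      # future time steps predicted per call (assumption)

class Forecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, HORIZON * N_FEATURES)

    def forward(self, x):                 # x: (batch, WINDOW, N_FEATURES)
        _, (h, _) = self.lstm(x)          # last hidden state summarizes the window
        out = self.head(h[-1])            # predict the next HORIZON steps
        return out.view(-1, HORIZON, N_FEATURES)

def detect(model, window, observed_future, threshold):
    """Return (alarm, error): alarm is True when the observed behaviour
    deviates from the model's prediction by more than `threshold`."""
    model.eval()
    with torch.no_grad():
        predicted = model(window.unsqueeze(0)).squeeze(0)
    error = torch.mean((predicted - observed_future) ** 2).item()
    return error > threshold, error
```

In such a scheme, the model would be trained only on attack-free driving data, and the alarm threshold would typically be calibrated on held-out normal data, for instance as a high percentile of the prediction error; the specific calibration used in the paper is not stated in the abstract.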