Robust Graph Autoencoder-Based Detection of False Data Injection Attacks Against Data Poisoning in Smart Grids
Abstract: Machine learning-based detection of false data injection attacks (FDIAs) in smart grids relies on labeled measurement data for training and testing. Most existing detectors are developed under the assumption that the training datasets carry correct labels. However, this assumption does not always hold, since training data may include measurement samples that are incorrectly labeled as benign, i.e., adversarial data poisoning samples that went undetected. Neglecting this aspect leaves detectors susceptible to data poisoning. Our investigations reveal that the detection rates (DRs) of existing detectors deteriorate by up to 9–29% when subjected to data poisoning in generalized and topology-specific settings. We therefore propose a generalized graph neural network-based anomaly detector that is robust against both FDIAs and data poisoning. It requires only benign datasets for training and employs an autoencoder with Chebyshev graph convolutional recurrent layers and an attention mechanism to capture the spatial and temporal correlations within measurement data. The proposed convolutional recurrent graph autoencoder is trained and tested on multiple topologies (14-, 39-, and 118-bus systems). Owing to these design choices, it delivers stable, generalized detection performance whose DR degrades by only 1.6–3.7% under high levels of data poisoning and unseen FDIAs in unobserved topologies.
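To make the described architecture concrete, the following is a minimal sketch (not the authors' code) of a Chebyshev graph convolutional recurrent autoencoder with temporal attention, scored by reconstruction error. It assumes PyTorch; all layer sizes, the Chebyshev order, and the toy 14-node graph are illustrative assumptions.

```python
# Hypothetical sketch of a Chebyshev graph-convolutional recurrent autoencoder
# for anomaly scoring; hyperparameters and interfaces are assumptions, not the
# paper's implementation.
import torch
import torch.nn as nn


def scaled_laplacian(adj: torch.Tensor) -> torch.Tensor:
    """Rescaled Laplacian 2L/lambda_max - I used by Chebyshev filters."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.clamp(min=1e-6).pow(-0.5))
    lap = torch.eye(adj.size(0)) - d_inv_sqrt @ adj @ d_inv_sqrt
    lam_max = torch.linalg.eigvalsh(lap).max()
    return 2.0 * lap / lam_max - torch.eye(adj.size(0))


class ChebConv(nn.Module):
    """Chebyshev spectral graph convolution of order K."""
    def __init__(self, in_dim, out_dim, k=3):
        super().__init__()
        self.k = k
        self.lin = nn.Linear(k * in_dim, out_dim)

    def forward(self, x, l_tilde):  # x: (batch, nodes, in_dim)
        t_prev = x
        t_curr = torch.einsum("ij,bjf->bif", l_tilde, x)
        terms = [t_prev, t_curr]
        for _ in range(2, self.k):
            t_next = 2 * torch.einsum("ij,bjf->bif", l_tilde, t_curr) - t_prev
            terms.append(t_next)
            t_prev, t_curr = t_curr, t_next
        return self.lin(torch.cat(terms[: self.k], dim=-1))


class GConvGRUCell(nn.Module):
    """GRU cell whose gates use Chebyshev graph convolutions (spatio-temporal)."""
    def __init__(self, in_dim, hid_dim, k=3):
        super().__init__()
        self.hid_dim = hid_dim
        self.gates = ChebConv(in_dim + hid_dim, 2 * hid_dim, k)
        self.cand = ChebConv(in_dim + hid_dim, hid_dim, k)

    def forward(self, x, h, l_tilde):
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=-1), l_tilde))
        z, r = zr.chunk(2, dim=-1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=-1), l_tilde))
        return (1 - z) * h + z * h_tilde


class GraphRecurrentAE(nn.Module):
    """Encode a measurement window, attend over time, decode; the
    reconstruction error serves as the anomaly score."""
    def __init__(self, n_feats, hid_dim=32, k=3):
        super().__init__()
        self.encoder = GConvGRUCell(n_feats, hid_dim, k)
        self.decoder = GConvGRUCell(n_feats, hid_dim, k)
        self.attn = nn.Linear(hid_dim, 1)        # temporal attention scores
        self.readout = nn.Linear(hid_dim, n_feats)

    def forward(self, x, l_tilde):               # x: (batch, time, nodes, feats)
        b, t, n, _ = x.shape
        h = x.new_zeros(b, n, self.encoder.hid_dim)
        enc_states = []
        for step in range(t):
            h = self.encoder(x[:, step], h, l_tilde)
            enc_states.append(h)
        states = torch.stack(enc_states, dim=1)          # (b, t, nodes, hid)
        alpha = torch.softmax(self.attn(states), dim=1)  # weights over time
        h_dec = (alpha * states).sum(dim=1)              # attended summary state
        recon = []
        for step in range(t):
            h_dec = self.decoder(x[:, step], h_dec, l_tilde)
            recon.append(self.readout(h_dec))
        return torch.stack(recon, dim=1)                 # reconstructed window


if __name__ == "__main__":
    # Toy run on a random 14-node graph with 2 measurement features per bus.
    adj = (torch.rand(14, 14) > 0.8).float()
    adj = ((adj + adj.t()) > 0).float()
    l_tilde = scaled_laplacian(adj)
    model = GraphRecurrentAE(n_feats=2)
    window = torch.randn(4, 12, 14, 2)                   # (batch, time, nodes, feats)
    recon = model(window, l_tilde)
    score = (recon - window).pow(2).mean(dim=(1, 2, 3))  # per-sample anomaly score
    print(score.shape)                                    # torch.Size([4])
```

Trained only on benign windows, such a model would flag a sample as an FDIA when its reconstruction error exceeds a threshold calibrated on benign validation data; the graph structure is supplied externally, which is what allows evaluation across different bus-system topologies.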