Abstract: Machine-learning-based insider threat analysis provides convenient and effective automated detection of internally generated cyberattacks. However, when input data are manipulated with slight perturbations, threat-detection models produce misclassifications; the problem is aggravated by the highly skewed class distribution of insider-threat data, in which malicious events occur only rarely. This paper proposes a WGAN-GP generative model conditioned on class labels, referred to as CWGAN-GP, that creates synthetic samples of the rare malicious activities for insider threat analysis, and shows that it generalizes well across different learning algorithms. Further, the robustness of supervised algorithms to unknown inputs has not been investigated in prior work. This study explores how synthetically created adversarial samples can increase the robustness of supervised models through adversarial training. We use a target classifier as the threat model to generate one-step and iterative adversarial samples and perform non-targeted test-time attacks on the classifiers. We evaluate the robustness of various learning models against synthetic data from other data-generation methods and demonstrate that adversarial training with data generated by CWGAN-GP makes insider-threat classifiers less susceptible to adversarial attacks, evaluated on multiple versions of the benchmark CMU CERT data set.
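The one-step and iterative adversarial samples mentioned above are typically produced with FGSM-style perturbations along the sign of the loss gradient. A minimal sketch on a logistic-regression surrogate classifier (the model, parameter names, and values below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step attack: perturb x by eps along the sign of the loss gradient."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w          # analytic d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def iterative_fgsm(x, y, w, b, eps, alpha, steps):
    """Iterative (BIM/PGD-style) attack, projected back into the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm(x_adv, y, w, b, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within eps of the original
    return x_adv
```

For a non-targeted test-time attack as described, such perturbed samples are fed to the trained classifier with the goal of flipping its prediction away from the true label; adversarial training then mixes these samples back into the training set.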