Functionality-Verification Attack Framework Based on Reinforcement Learning Against Static Malware Detectors

Published: 01 Jan 2024 · Last Modified: 04 Nov 2025 · IEEE Trans. Inf. Forensics Secur. 2024 · CC BY-SA 4.0
Abstract: Current adversarial attacks can achieve effective evasion against machine learning-based static malware detectors, but they suffer from long example generation times and a lack of functionality validation. To address these issues, we propose an enhanced adversarial example generation framework based on reinforcement learning. The framework improves generation efficiency by redesigning the state space and action space used by the agents. Furthermore, for the first time, we incorporate adversarial example functionality validation as a component of the generation process within the framework, significantly improving the efficiency of verification. Several popular detectors are chosen as victim models to assess the effectiveness of the attack framework, and their vulnerabilities are elucidated through detector explanations and analysis of the attack results. Finally, a policy distillation approach based on transfer learning is employed to enhance the generalizability of the framework: by learning expert knowledge from agents trained against different detectors, the framework can launch effective attacks against various detectors. Experimental results verify the effectiveness of the proposed framework.
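The generation loop the abstract describes, in which an agent applies functionality-preserving modifications and validation is folded into generation rather than deferred, can be sketched conceptually as follows. This is a minimal illustration, not the paper's implementation: the action names, the stub detector, and the stub functionality check are all hypothetical stand-ins (the paper's actual action space, victim models, and sandbox-based validation are not reproduced here), and a random policy replaces the trained RL agent.

```python
import random

# Hypothetical action set of functionality-preserving PE modifications.
# Names are illustrative only; the paper's redesigned action space differs.
ACTIONS = ["append_overlay", "add_section", "pad_section", "rename_section"]

def apply_action(sample: bytes, action: str) -> bytes:
    """Apply a toy stand-in for a functionality-preserving modification."""
    if action == "append_overlay":
        return sample + b"\x00" * 64            # bytes appended after the overlay
    if action == "add_section":
        return sample + b".fake\x00" + b"\x90" * 32
    if action == "pad_section":
        return sample + b"\x00" * 128
    return sample + b".rsrc2\x00"               # stand-in for rename_section

def detector_score(sample: bytes) -> float:
    """Stub static detector returning P(malicious); replace with a real model."""
    return max(0.0, 1.0 - len(sample) / 4096)   # toy heuristic, not a real detector

def preserves_functionality(sample: bytes) -> bool:
    """Stub functionality check; the framework would run the sample in a sandbox."""
    return len(sample) < 8192                    # toy bound standing in for execution

def evade(sample: bytes, threshold: float = 0.5, max_steps: int = 20):
    """Mutate, validate functionality inline, then query the detector.

    Validating each candidate *before* accepting it is the point the abstract
    makes: broken examples are rejected during generation, not after.
    """
    for _ in range(max_steps):
        candidate = apply_action(sample, random.choice(ACTIONS))
        if not preserves_functionality(candidate):
            continue                             # discard breaking modifications early
        sample = candidate
        if detector_score(sample) < threshold:
            return sample, True                  # evasion succeeded
    return sample, False

adv, success = evade(b"MZ" + b"\x00" * 1024)
```

In the paper's framework the random choice above is replaced by a policy learned with reinforcement learning, and policy distillation transfers expert knowledge between agents trained against different victim detectors.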