Achieving High-Fidelity Explanations for Risk Exposition Assessment in the Cybersecurity Domain

Published: 01 Jan 2023, Last Modified: 09 Apr 2025 · eCrime 2023 · CC BY-SA 4.0
Abstract: Understanding AI-driven systems has become fundamental, particularly when these systems are employed for critical decision-making, as is the case in the field of cybersecurity. In this regard, explainability has been extensively advocated as a cornerstone for comprehending a model, thereby enhancing trust in and accountability of data-driven systems. Through the successful use case of a risk exposure assessment framework that aims to proactively reduce an organization's attack surface, we propose an explainable proxy founded on the generation of systematic evaluations of explanations. The proposed framework offers a swift and dependable method for assessing explanations tailored specifically to the cybersecurity domain.