ExAI5G: A Logic-Based Explainable AI Framework for Intrusion Detection in 5G Networks

ICLR 2026 Conference Submission 16967 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · License: CC BY 4.0
Keywords: Explainable AI, Intrusion detection, Logic-based rule extraction, Integrated Gradients
Abstract: Intrusion detection systems (IDS) for 5G networks must cope with complex, high-volume traffic. Although opaque "black-box" models can achieve high accuracy, their lack of transparency hinders trust and effective operational response. We propose \emph{ExAI5G}, a framework that prioritizes interpretability by integrating a Transformer-based deep learning IDS with logic-based explainable AI (XAI) techniques. The framework uses Integrated Gradients to attribute feature importance and extracts a surrogate decision tree to derive logical rules. We also introduce a novel evaluation methodology for LLM-generated explanations: a strong evaluator LLM rates their \textbf{actionability}, and we measure the explanations' \textbf{semantic similarity} and \textbf{faithfulness}. On a 5G IoT intrusion dataset, our system achieves \textbf{99.9\%} accuracy and a \textbf{0.854} macro F1-score. More importantly, we extract 16 logical rules with \textbf{99.7\%} fidelity, making the model's reasoning fully transparent. Our evaluation shows that modern LLMs can generate explanations with perfect faithfulness and actionability, demonstrating that a trustworthy, effective IDS need not trade transparency for the marginal performance gains of an opaque model.
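
The abstract's two-step pipeline (attribution, then rule extraction) maps onto standard tooling. Below is a minimal sketch, not the authors' implementation: it assumes a trained torch module `model` mapping flow-feature vectors to class logits, an input tensor `X`, and a `feature_names` list (all hypothetical names), and uses Captum's IntegratedGradients plus a scikit-learn decision tree as the surrogate.

```python
import torch
from captum.attr import IntegratedGradients
from sklearn.tree import DecisionTreeClassifier, export_text

# `model` and `X` are assumed: a trained Transformer IDS (torch.nn.Module
# returning class logits) and a float tensor of traffic features.

# Step 1 -- Integrated Gradients: attribute each prediction to input
# features, here against an all-zero baseline (an assumption).
ig = IntegratedGradients(model)
preds = model(X).argmax(dim=1)                         # black-box labels
attr = ig.attribute(X, baselines=torch.zeros_like(X), target=preds)

# Step 2 -- Surrogate decision tree: fit a shallow tree to mimic the
# model's labels, then read logical rules off its branches. Fidelity is
# the surrogate's agreement with the black-box predictions.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
surrogate.fit(X.detach().numpy(), preds.numpy())
fidelity = surrogate.score(X.detach().numpy(), preds.numpy())
print(f"fidelity = {fidelity:.3f}")
print(export_text(surrogate, feature_names=feature_names))  # IF/THEN rules
```

Fitting the tree on the model's predicted labels rather than the ground truth is what makes it a surrogate: the rules describe what the black box does, and the reported 99.7\% fidelity would correspond to that agreement score.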
Primary Area: interpretability and explainable AI
Submission Number: 16967