FAME: $\underline{F}$ormal $\underline{A}$bstract $\underline{M}$inimal $\underline{E}$xplanation for neural networks

ICLR 2026 Conference Submission21947 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: abductive explanations, abstract interpretation, robustness, NN verification
TL;DR: We introduce FAME, a novel method grounded in abstract interpretation that efficiently generates formal, minimal explanations for large neural networks by leveraging dedicated perturbation domains.
Abstract: We propose $\textbf{FAME}$ (Formal Abstract Minimal Explanations), a new class of abductive explanations grounded in abstract interpretation. FAME is the first method to scale to large neural networks while reducing explanation size. Our main contribution is the design of dedicated perturbation domains that eliminate the need for traversal order. FAME progressively shrinks these domains and leverages LiRPA-based bounds to discard irrelevant features, ultimately converging to a $\textbf{formal abstract minimal explanation}$. To assess explanation quality, we introduce a procedure that measures the worst-case distance between an abstract minimal explanation and a true minimal explanation. This procedure combines adversarial attacks with an optional $VERI{\large X}+$ refinement step. We benchmark FAME against $VERI{\large X}+$ and demonstrate consistent gains in both explanation size and runtime on medium- to large-scale neural networks.
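The abstract describes the core loop: fix a candidate feature subset, bound the network's outputs over a perturbation domain on the remaining features, and discard a feature whenever the bounds still certify the original prediction. The toy sketch below illustrates that idea on a one-layer network using plain interval bound propagation as a stand-in for LiRPA-based bounds; all function names (`interval_logits`, `is_explanation`, `shrink`) and the greedy shrinking order are illustrative assumptions, not the authors' FAME implementation.

```python
import numpy as np

def interval_logits(W, b, lo, hi):
    """Interval bounds on W @ x + b for x in the box [lo, hi] (elementwise)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lower = W_pos @ lo + W_neg @ hi + b
    upper = W_pos @ hi + W_neg @ lo + b
    return lower, upper

def is_explanation(W, b, x, fixed, eps=0.5):
    """True if fixing the features in `fixed` (all others free in +/- eps)
    provably preserves the network's predicted class at x."""
    pred = int(np.argmax(W @ x + b))
    lo, hi = x.copy(), x.copy()
    free = np.setdiff1d(np.arange(x.size), fixed)  # perturbation domain
    lo[free] -= eps
    hi[free] += eps
    lower, upper = interval_logits(W, b, lo, hi)
    others = np.delete(np.arange(b.size), pred)
    # Certified iff the predicted logit's lower bound beats every rival's upper bound.
    return bool(lower[pred] > upper[others].max())

def shrink(W, b, x, eps=0.5):
    """Greedily drop features whose removal keeps the robustness certificate,
    converging to a (subset-)minimal explanation under these bounds."""
    fixed = list(range(x.size))
    for i in range(x.size):
        trial = [j for j in fixed if j != i]
        if is_explanation(W, b, x, np.array(trial, dtype=int), eps):
            fixed = trial
    return fixed
```

For example, with `W = [[1, 0], [0, 1]]`, `b = 0`, `x = [1, 0]`, and `eps = 0.6`, feature 0 can be freed (its logit's lower bound 0.4 still beats 0), but freeing feature 1 as well would let the rival logit reach 0.6, so the sketch keeps `[1]` as the explanation. FAME's contribution, per the abstract, is precisely avoiding this fixed traversal order via dedicated perturbation domains that shrink progressively.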
Supplementary Material: pdf
Primary Area: interpretability and explainable AI
Submission Number: 21947