Unveiling the Black Box: Neural Cryptanalysis with XAI

Published: 01 Jan 2024 · Last Modified: 09 Nov 2025 · SMC 2024 · CC BY-SA 4.0
Abstract: At CRYPTO'19, Gohr [1] presented ResNet-based neural distinguishers (NDs) for the round-reduced SPECK32/64 cipher. However, because such deep learning models are used as black boxes, it is hard for humans to understand why these distinguishers work, which impedes advances in cryptanalytic knowledge. In this work, we adapt eXplainable Artificial Intelligence (XAI) techniques, notably Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), to gain a detailed understanding of the features that Gohr's neural distinguishers rely on.
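To illustrate the kind of analysis LIME performs, the sketch below applies the core LIME idea to a toy stand-in for a neural distinguisher. Everything here is hypothetical: `toy_distinguisher` is not Gohr's trained model (it is a hand-built function that depends on two chosen bits), and the sample count, flip probability, and proximity kernel are illustrative choices, not values from the paper. The point is the mechanism: perturb the input bits around one example, weight the perturbed samples by proximity, fit a weighted linear surrogate, and read off per-bit importances.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_distinguisher(X):
    # Hypothetical stand-in for a trained ND over 64-bit inputs:
    # its score depends only on bits 5 and 20 (sigmoid of a linear form).
    return 1 / (1 + np.exp(-(2.0 * X[:, 5] + 1.5 * X[:, 20] - 1.5)))

def lime_bit_importance(predict, x, n_samples=2000, flip_p=0.3):
    """LIME-style local explanation for a binary feature vector x:
    perturb bits, weight samples by proximity, fit a linear surrogate."""
    d = x.size
    mask = rng.random((n_samples, d)) < flip_p    # which bits to flip
    Z = np.where(mask, 1 - x, x)                  # perturbed binary samples
    y = predict(Z)                                # black-box model scores
    dist = mask.sum(axis=1) / d                   # fraction of flipped bits
    w = np.exp(-(dist ** 2) / 0.25)               # proximity kernel weights
    A = np.hstack([Z, np.ones((n_samples, 1))])   # design matrix + intercept
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]                              # per-bit surrogate weights

x0 = rng.integers(0, 2, size=64)
imp = lime_bit_importance(toy_distinguisher, x0)
top = np.argsort(-np.abs(imp))[:2]
print(sorted(top.tolist()))                       # dominant bits of the toy model
```

On this toy model the surrogate's largest coefficients fall on the two bits the function actually uses, which is exactly the diagnostic one would hope to extract from a real neural distinguisher: a ranking of ciphertext-pair bit positions by local influence on the model's output.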