Keywords: Symmetric Cryptanalysis, Neural Distinguisher
TL;DR: This paper constructs a theoretical bridge between learning theory and deep learning-based symmetric cryptanalysis and, guided by the theory, achieves state-of-the-art results on relevant tasks.
Abstract: The success of deep learning in cryptanalysis has so far been demonstrated largely empirically, and the field lacks a foundational theoretical framework to explain this performance. We bridge this gap by establishing a formal learning-theoretic framework for symmetric cryptanalysis. Specifically, we introduce the Coin-Tossing model to abstract the process of constructing distinguishers, and we propose a unified algebraic representation, the Conjunctive Parity Form (CPF), that captures a broad class of traditional distinguishers without requiring domain-specific details. Within this framework, we prove that any concept in the CPF class is learnable in sub-exponential time in the symmetric-cryptanalysis setting. Guided by insights from our complexity analysis, we demonstrate that preprocessing the data with a flexible output-generating function can simplify the learning task for neural networks. This approach yields a state-of-the-art practical result: the first improvement on the deep learning-based distinguisher for $S{\scriptsize PECK}$32/64 since 2019, improving accuracy and extending the attack from 8 to a record 9 rounds.
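To make the CPF abstraction concrete, here is a minimal Python sketch of one plausible reading of a CPF concept: a conjunction of parity checks, each taken over a subset of ciphertext bits selected by a mask. The mask values, target parities, and function names are illustrative assumptions for exposition only, not the paper's formal definition or any actual distinguisher.

```python
# Hypothetical sketch of a CPF-style concept: AND of parity checks over bit masks.
# Masks and targets below are placeholders, not distinguishers from the paper.

def parity(x: int, mask: int) -> int:
    """Parity (XOR) of the bits of x selected by mask."""
    return bin(x & mask).count("1") & 1

def cpf_concept(x: int, masks: list[int], targets: list[int]) -> int:
    """Return 1 iff every selected parity matches its target bit (conjunction of parities)."""
    return int(all(parity(x, m) == t for m, t in zip(masks, targets)))

# Toy example: a 32-bit ciphertext word checked against two hypothetical parity constraints.
ciphertext = 0x9A3F11C4
masks = [0x0000FFFF, 0x80000001]   # placeholder bit masks
targets = [0, 1]                   # placeholder target parities
print(cpf_concept(ciphertext, masks, targets))
```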
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 10849