Neural Distinguishers on TinyJAMBU-128 and GIFT-64

Published: 01 Jan 2022, Last Modified: 13 May 2023. ICONIP (5) 2022.
Abstract: At CRYPTO 2019, Gohr introduced a pioneering approach, successfully applying neural differential distinguisher ($\mathcal{NDD}$) based differential cryptanalysis to Speck32/64, achieving higher accuracy than pure differential distinguishers and reducing the chosen-plaintext data complexity. Inspired by Gohr's work, we use neural networks to analyze $\texttt{TinyJAMBU-128}$, one of the ten finalists in NIST's lightweight cryptography standardization process. Based on an MLP, we construct a Neural Single Differential Distinguisher ($\mathcal{NSDD}$) that achieves an accuracy of $99.58\%$ with 32-bit associated data (AD). The experimental results show that $\texttt{TinyJAMBU-128}$ with 32-bit AD is vulnerable to differential attacks. In this article, we also explore $\texttt{GIFT-64}$. Based on Long Short-Term Memory (LSTM), we construct an $\mathcal{NSDD}$ and a Neural Polytopic Differential Distinguisher ($\mathcal{NPDD}$). For 4-, 5-, and 6-round $\texttt{GIFT-64}$, we obtain accuracies of $99.73\%$, $85.08\%$, and $57.54\%$ with the $\mathcal{NPDD}$, and $97.97\%$, $75.11\%$, and $57.25\%$ with the $\mathcal{NSDD}$, respectively. Compared with Yadav's MLP-based research, we achieve higher accuracy with only $\frac{1}{4}$ of the training data, showing that our model outperforms Yadav's.
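To illustrate the Gohr-style training setup the abstract describes, here is a minimal sketch of a neural differential distinguisher. Everything in it is an assumption for illustration only: it uses a toy 16-bit ARX-like round function (`toy_encrypt`) instead of TinyJAMBU-128 or GIFT-64, a made-up input difference `0x0040`, and a small from-scratch NumPy MLP rather than the paper's actual architecture. The idea it demonstrates is the real one: label 1 for ciphertext pairs whose plaintexts satisfy a fixed difference, label 0 for unrelated random pairs, and train a classifier to tell them apart with accuracy above 50%.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_encrypt(x, round_keys):
    """Two rounds of a toy 16-bit round function (illustrative, NOT a real cipher)."""
    for k in round_keys:
        x = ((x << 3) | (x >> 13)) & 0xFFFF   # rotate left by 3
        x = (x + k) & 0xFFFF                  # modular key addition
        x = x ^ (x >> 5)                      # linear diffusion
    return x

def make_dataset(n, delta=0x0040):
    """Label 1: pair from plaintexts with fixed XOR difference `delta`.
    Label 0: pair from two unrelated random plaintexts."""
    keys = rng.integers(0, 1 << 16, size=(2, n), dtype=np.uint32)
    p0 = rng.integers(0, 1 << 16, size=n, dtype=np.uint32)
    y = rng.integers(0, 2, size=n)
    p1 = np.where(y == 1, p0 ^ delta,
                  rng.integers(0, 1 << 16, size=n, dtype=np.uint32))
    # Feature vector: the 16 bits of the ciphertext XOR difference
    diff = toy_encrypt(p0, keys) ^ toy_encrypt(p1, keys)
    X = ((diff[:, None] >> np.arange(16)) & 1).astype(np.float64)
    return X, y.astype(np.float64)

def train_mlp(X, y, hidden=32, lr=0.5, epochs=300):
    """One-hidden-layer MLP, full-batch gradient descent, binary cross-entropy."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        z = np.clip(h @ W2 + b2, -30, 30)
        p = 1.0 / (1.0 + np.exp(-z))
        g = (p - y) / n                        # dLoss/dz for cross-entropy + sigmoid
        gh = np.outer(g, W2) * (1.0 - h ** 2)  # backprop through tanh
        W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
    def predict(Xq):
        z = np.clip(np.tanh(Xq @ W1 + b1) @ W2 + b2, -30, 30)
        return 1.0 / (1.0 + np.exp(-z))
    return predict

X, y = make_dataset(6000)
predict = train_mlp(X[:5000], y[:5000])
acc = float(np.mean((predict(X[5000:]) > 0.5) == (y[5000:] == 1)))
print(f"held-out distinguisher accuracy: {acc:.3f}")
```

Because the toy round function leaks a strongly biased ciphertext difference, even this tiny MLP distinguishes the two classes well above the 50% random-guessing baseline. Reproducing the paper's results would require substituting a real TinyJAMBU-128 or GIFT-64 implementation, the paper's chosen input differences, and its MLP/LSTM architectures.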
