Improving ML attacks on LWE with data repetition and stepwise regression

ICLR 2026 Conference Submission19662 Authors

19 Sept 2025 (modified: 08 Oct 2025)
Keywords: Cryptanalysis, Transformers, LWE, Learning with errors
Abstract: ML attacks on learning with errors (LWE) with binary or small secrets only succeed in LWE settings with very simple secrets. For example, they can recover secrets with up to three non-zero bits when models are trained on unreduced LWE data, and three non-zero bits in the ``cruel region'' (Nolte et al., 2024) when BKZ preprocessing is applied. We show that larger training sets and the use of repeated examples in the training data allow the recovery of denser secrets. We empirically observe a power-law relationship between the number of model-based secret-recovery attempts, dataset size, and the number of repeated examples. We introduce a stepwise regression technique to recover the ``cool bits'' of the secret. Together, these techniques allow the recovery of denser binary secrets: up to Hamming weight $70$ (with $8$ cruel bits) for dimension $256$ with $\log_2 q=20$, and up to Hamming weight $75$ (with $7$ cruel bits) for dimension $512$ with $\log_2 q=41$ (vs. Hamming weights $33$ and $63$, with $3$ cruel bits, in previous work). We also demonstrate our methods' effectiveness on denser ternary secrets, showing a substantial improvement over prior work.
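The abstract names a stepwise regression technique for recovering secret bits. The paper's actual pipeline is not reproduced here; as a minimal illustrative sketch, the snippet below shows generic forward stepwise regression (greedy feature selection by residual reduction) recovering a sparse binary coefficient vector from noisy linear measurements. All names and parameters (`n_features`, `n_samples`, sparsity `k`, noise level) are hypothetical, and the sparsity is assumed known.

```python
import numpy as np

# Toy setup: sparse binary "secret" w, random measurement matrix X, and
# noisy observations y = X w + noise. Parameters are illustrative only.
rng = np.random.default_rng(0)
n_features, n_samples, k = 64, 200, 5

w = np.zeros(n_features)
w[rng.choice(n_features, size=k, replace=False)] = 1.0
X = rng.normal(size=(n_samples, n_features))
y = X @ w + 0.01 * rng.normal(size=n_samples)

# Forward stepwise regression: greedily add, one at a time, the feature
# whose inclusion minimizes the residual sum of squares of a least-squares
# fit; stop after k steps (sparsity assumed known in this sketch).
selected: list[int] = []
for _ in range(k):
    best_i, best_rss = -1, np.inf
    for i in range(n_features):
        if i in selected:
            continue
        cols = X[:, selected + [i]]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        rss = float(((y - cols @ coef) ** 2).sum())
        if rss < best_rss:
            best_i, best_rss = i, rss
    selected.append(best_i)

recovered = sorted(selected) == sorted(np.flatnonzero(w).tolist())
print(recovered)
```

With a strong signal and independent Gaussian measurements, the greedy selection picks the true support first; in the paper's setting the regression targets are derived from trained models rather than synthetic data as here.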
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 19662