Oscillations Make Neural Networks Robust to Quantization

Published: 02 Dec 2025 · Last Modified: 02 Dec 2025 · Accepted by TMLR · License: CC BY 4.0
Abstract: We challenge the prevailing view that weight oscillations observed during Quantization Aware Training (QAT) are merely undesirable side effects and argue instead that they are an essential part of QAT. We show in a univariate linear model that QAT induces an additional loss term that causes oscillations by pushing weights away from their nearest quantization level. Based on this mechanism, we derive a regularizer that induces oscillations in the weights of neural networks during training. Our empirical results on ResNet-18 and a Tiny Vision Transformer, evaluated on CIFAR-10 and Tiny ImageNet across a range of quantization levels, demonstrate that training with this regularizer followed by post-training quantization (PTQ) is sufficient to recover the performance of QAT in most cases. This work provides further insight into the dynamics of QAT and offers a new explanation of the role of oscillations, which until now have been considered primarily detrimental to quantization.
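To make the idea concrete, below is a minimal PyTorch sketch of one way an oscillation-inducing regularizer of this kind could look. It is an illustrative assumption, not the paper's exact formulation (see the linked code for that): the function name `oscillation_regularizer`, the quadratic form of the penalty, and the `strength` coefficient are all hypothetical.

```python
import torch

def oscillation_regularizer(weights: torch.Tensor,
                            step: float,
                            strength: float = 1e-4) -> torch.Tensor:
    """Hypothetical penalty that pushes weights away from their
    nearest quantization level on a uniform grid with spacing `step`."""
    # Distance, in units of the step size, from each weight to its
    # nearest quantization level; lies in [0, 0.5]. torch.round has
    # zero gradient, so gradients flow only through weights / step.
    dist = torch.abs(weights / step - torch.round(weights / step))
    # Penalizing (0.5 - dist)^2 pulls weights toward the midpoint
    # between adjacent levels; combined with SGD noise this makes
    # them oscillate across quantization boundaries during training.
    return strength * ((0.5 - dist) ** 2).sum()

# Sketch of usage: add the penalty to the task loss during ordinary
# (full-precision) training, then apply standard PTQ afterwards.
w = torch.randn(64, requires_grad=True)
loss = (w ** 2).mean() + oscillation_regularizer(w, step=2 / 255)
loss.backward()
```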
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/saintslab/osc_reg
Assigned Action Editor: ~Tatiana_Likhomanenko1
Submission Number: 5571