Keywords: discrete optimization, low-precision training, quantised training, stochastic gradient descent, convergence analysis, multinomial updates, deep learning efficiency
TL;DR: We present a general framework for convergence of discrete update schemes in deep learning, with a multinomial update rule as a concrete example.
Abstract: Modern deep learning models require immense computational resources, motivating research into low-precision training.
Quantised training addresses this by representing training components in low-bit integers, but typically relies on discretising real-valued updates.
We introduce an alternative approach where the update rule itself is discrete, avoiding the quantisation of continuous updates by design.
We establish convergence guarantees for a general class of such discrete schemes and present a multinomial update rule as a concrete example, supported by empirical evaluation. This perspective opens new avenues for efficient training, particularly for models with inherently discrete structure.
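For concreteness, the sketch below shows one plausible instantiation of a discrete update on integer-valued weights, where each parameter's move is sampled from a categorical (multinomial) distribution over {-1, 0, +1} with probabilities derived from the gradient. The toy objective, scale, temperature, and probability construction are illustrative assumptions, not the paper's exact rule.

```python
# Minimal sketch of a discrete (multinomial) update rule on integer weights.
# The probability construction below is a hypothetical illustration,
# not the specific scheme proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(w) = 0.5 * ||scale * w - t||^2 over integer weights w,
# where `scale` maps the integer grid to real values.
t = np.array([1.7, -0.3, 2.4])
scale = 0.1
w = np.zeros(3, dtype=np.int64)  # integer-valued parameters

def grad(w):
    # Gradient of the toy objective with respect to w.
    return scale * (scale * w - t)

temperature = 0.05  # controls how deterministic the sampled moves are
for step in range(500):
    g = grad(w)
    # Per-parameter scores over the discrete moves {-1, 0, +1}: a move
    # opposite to the gradient sign is favoured, with strength growing
    # with the gradient magnitude (softmax over scores).
    scores = np.stack([+g / temperature,   # score for move -1
                       np.zeros_like(g),   # score for move  0
                       -g / temperature])  # score for move +1
    probs = np.exp(scores - scores.max(axis=0))
    probs /= probs.sum(axis=0)
    # Sample one discrete move per parameter (a categorical/multinomial draw).
    moves = np.array([rng.choice([-1, 0, 1], p=probs[:, i]) for i in range(w.size)])
    w = w + moves

print("learned integer weights:", w)           # approx. t / scale = [17, -3, 24]
print("dequantised weights:   ", scale * w)
```

Because the moves are sampled rather than rounded from a real-valued step, the iterates stay on the integer grid throughout training, which is the property the abstract highlights.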
Submission Number: 37