Keywords: Soft Quantization, Trainable Quantization, Input Compression, Tiny Machine Learning, Split Inference
TL;DR: A trainable quantization method that learns to compress the inputs of a neural network for split inference between resource-constrained edge devices and the cloud.
Abstract: The growing demand for machine learning applications in the context of the Internet of Things calls for new approaches to optimize the use of limited compute and memory resources.
Despite significant progress in reducing model sizes and improving efficiency, many applications still require remote servers to provide the necessary resources.
However, such approaches rely on transmitting data from edge devices to remote servers, which may not always be feasible due to bandwidth, latency, or energy constraints.
We propose a task-specific, trainable feature quantization layer that compresses the input features of a neural network. This can significantly reduce the amount of data that needs to be transferred from the device to a remote server.
In particular, the layer allows each input feature to be quantized to a user-defined number of bits, enabling simple on-device compression at the time of data collection.
The layer approximates step functions with sigmoids, making the quantization thresholds trainable.
By concatenating the outputs of multiple such sigmoids, a scheme introduced as bitwise soft quantization, the quantized values themselves become trainable when the layer is integrated into a neural network.
We compare our method to full-precision inference as well as to several quantization baselines.
Experiments show that our approach outperforms standard quantization methods, while maintaining accuracy levels close to those of full-precision models.
In particular, depending on the dataset, compression factors of $5\times$ to $16\times$ can be achieved compared to $32$-bit input without significant performance loss.
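For illustration, below is a minimal PyTorch sketch of the sigmoid-based soft quantization idea described in the abstract. The class name `SoftQuantization`, the threshold initialization, the fixed temperature, and the assumption that inputs are roughly normalized to $[0, 1]$ are illustrative choices for this example, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SoftQuantization(nn.Module):
    """Sketch of a soft quantization layer with trainable thresholds.

    Each scalar input feature is passed through (2**bits - 1) sigmoid
    "steps" whose thresholds are trainable parameters; their sum
    approximates a staircase quantization function.
    """

    def __init__(self, num_features, bits=2, temperature=25.0):
        super().__init__()
        levels = 2 ** bits - 1
        # One trainable threshold per step and per feature, spread over [0, 1].
        init = torch.linspace(1.0 / (levels + 1), levels / (levels + 1), levels)
        self.thresholds = nn.Parameter(init.repeat(num_features, 1))
        self.temperature = temperature  # steepness of each sigmoid step
        self.levels = levels

    def forward(self, x):
        # x: (batch, num_features), assumed roughly normalized to [0, 1].
        # The sum of shifted sigmoids approximates a step function; the
        # result is rescaled back to [0, 1].
        steps = torch.sigmoid(self.temperature * (x.unsqueeze(-1) - self.thresholds))
        return steps.sum(dim=-1) / self.levels


# Usage: 2-bit soft quantization of 8 input features in front of a model.
quant = SoftQuantization(num_features=8, bits=2)
x = torch.rand(4, 8)
print(quant(x).shape)  # torch.Size([4, 8])
```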
Submission Number: 102