EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models

Published: 22 Dec 2022 · Last Modified: 06 Apr 2026 · Springer Nature (SN) Computer Science · arXiv.org perpetual, non-exclusive license
Abstract: With the advent of deep learning applications on edge devices, researchers are actively trying to optimize deployment on low-power, memory-constrained devices. Established compression methods such as quantization, pruning, and architecture search leverage commodity hardware. Beyond conventional compression algorithms, one may also redesign the operations of deep learning models to obtain more efficient implementations. To this end, we propose EuclidNet, a compression method designed for hardware implementation, which replaces multiplication, xw, with the squared Euclidean distance (x − w)². We show that EuclidNet is aligned with matrix multiplication and that it can be used as a measure of similarity in the case of convolutional layers. Furthermore, we show that under various transformations and noise scenarios, EuclidNet matches the performance of deep learning models built on multiplication operations.
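The core idea can be illustrated with a minimal sketch (hypothetical function names; the paper's exact formulation and signs may differ). The negated squared distance −Σ(x − w)² serves as a similarity score, and expanding the square shows why it is aligned with the usual multiply-accumulate: −Σ(x − w)² = 2Σxw − Σx² − Σw², i.e., twice the dot product minus the squared norms of the input and the filter.

```python
import numpy as np

def euclid_response(x, w):
    # EuclidNet-style response: negated squared Euclidean distance
    # between an input patch x and a filter w, used as a similarity
    # measure in place of the dot product. (Illustrative sketch only.)
    return -np.sum((x - w) ** 2)

def dot_response(x, w):
    # Conventional multiply-accumulate (dot-product) response.
    return np.sum(x * w)

# Algebraic link: -sum((x-w)^2) == 2*sum(x*w) - sum(x^2) - sum(w^2),
# so the Euclidean response is an affine function of the dot product.
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])
print(euclid_response(x, w))
print(2 * dot_response(x, w) - np.sum(x**2) - np.sum(w**2))
```

Both printed values coincide, which is the sense in which the distance-based operation carries the same information as multiplication while being implementable with subtraction and squaring.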