TL;DR: We propose a general and loss-driven \textbf{L}oss\textbf{L}ess \textbf{C}ompression theoretical framework (\textbf{LLC}).
Abstract: This work focuses on general and loss-value-driven lossless model compression, ensuring that the model's loss value remains unchanged or decreases after compression.
A key challenge is to effectively leverage compression error and to define the boundary for lossless compression so that the model's loss is minimized, i.e., so that compression makes the model better. Currently, there is no systematic approach to determining this error boundary or to understanding its specific impact on model performance.
We propose a general and loss-driven \textbf{L}oss\textbf{L}ess \textbf{C}ompression theoretical framework (\textbf{LLC}), which further delineates the compression neighborhood and higher-order analysis boundaries through the total differential, thereby specifying the error range within which a model can be compressed without loss.
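For intuition only (this expansion is our own illustration, not an equation taken from the submission), a first-order version of such a total-differential condition could be written, with $L$ the loss, $W_\ell$ the weights of layer $\ell$, and $\Delta W_\ell$ the compression error injected into that layer, as

$$\Delta L \;\approx\; \sum_{\ell}\big\langle \nabla_{W_\ell} L,\; \Delta W_\ell \big\rangle \;+\; \sum_{\ell} O\!\big(\lVert \Delta W_\ell\rVert^{2}\big) \;\le\; 0,$$

so the lossless neighborhood would be the set of per-layer errors $\{\Delta W_\ell\}$ for which such a bound holds.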
To verify the effectiveness of LLC, we apply it to different compression techniques, including quantization and decomposition. For quantization, we reformulate the classic quantization search problem as a grouped knapsack problem within the lossless neighborhood, achieving lossless quantization while improving computational efficiency.
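As a rough, hypothetical sketch of how such a grouped (multiple-choice) knapsack over per-layer bit-widths could be solved, suppose each layer contributes a group of candidates, each with an integer memory cost and an estimated loss change, and exactly one candidate must be chosen per group under a total memory budget; the function, numbers, and dynamic program below are illustrative, not the submission's implementation.

# Hypothetical sketch (not the paper's implementation): grouped / multiple-choice
# knapsack for per-layer bit-width selection. Each group is one layer; each item
# is a candidate bit-width with an integer memory cost and an estimated loss change.
# Exactly one item is chosen per group, minimizing the total estimated loss change
# subject to a total memory budget.
import math

def grouped_knapsack(groups, budget):
    """groups[g]: list of (cost, loss_delta) candidates for layer g;
    budget: total integer cost allowed. Returns (total_loss_delta, picks)."""
    INF = math.inf
    # dp[b]: minimal total loss change over the groups processed so far with cost <= b
    dp = [0.0] * (budget + 1)
    choice = []  # choice[g][b]: item index picked at group g to reach budget state b
    for items in groups:
        new_dp = [INF] * (budget + 1)
        pick = [-1] * (budget + 1)
        for b in range(budget + 1):
            for i, (cost, delta) in enumerate(items):
                if cost <= b and dp[b - cost] + delta < new_dp[b]:
                    new_dp[b] = dp[b - cost] + delta
                    pick[b] = i
        choice.append(pick)
        dp = new_dp
    if dp[budget] == INF:
        return None, None  # no feasible assignment under this budget
    # Backtrack one chosen bit-width index per layer.
    picks, b = [], budget
    for g in range(len(groups) - 1, -1, -1):
        i = choice[g][b]
        picks.append(i)
        b -= groups[g][i][0]
    return dp[budget], picks[::-1]

# Tiny illustrative example: 3 layers, candidates = (memory cost, estimated loss increase).
layers = [[(4, 0.00), (2, 0.01), (1, 0.05)],
          [(4, 0.00), (2, 0.00), (1, 0.02)],
          [(4, 0.00), (2, 0.03), (1, 0.10)]]
print(grouped_knapsack(layers, budget=7))  # minimal estimated loss change + one index per layer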
For decomposition, LLC addresses the approximation problem under low-rank constraints, automatically determining the rank for each layer and producing lossless low-rank models.
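Again purely as an illustration of the flavor of per-layer rank selection (the actual loss-driven criterion is the submission's own and is not reproduced here), one could keep, for each layer, the smallest truncated-SVD rank whose reconstruction error stays within a prescribed tolerance; the function and tolerance below are assumptions for the sketch.

# Hypothetical sketch (not the paper's criterion): pick the smallest truncated-SVD
# rank whose Frobenius reconstruction error stays within a given tolerance, and
# return the corresponding low-rank factors for one layer.
import numpy as np

def select_rank(weight, tol):
    """weight: 2-D array of shape (m, n); tol: allowed Frobenius-norm approximation error."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    sq = s ** 2
    # err[r] = ||W - W_r||_F when keeping the top-r singular values (err[k] = 0).
    err = np.sqrt(np.concatenate([np.cumsum(sq[::-1])[::-1], [0.0]]))
    rank = int(np.argmax(err <= tol))   # first (smallest) rank meeting the tolerance
    a = u[:, :rank] * s[:rank]          # (m, rank): left factor scaled by singular values
    b = vt[:rank, :]                    # (rank, n): right factor
    return rank, (a, b)

# Example: a matrix that is numerically rank-8 is recovered with rank == 8.
w = np.random.randn(256, 8) @ np.random.randn(8, 64)
r, (a, b) = select_rank(w, tol=1e-6)   # then a @ b approximates w within tol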
We conduct extensive experiments with multiple neural network architectures on different datasets. The results show that, without any fancy tricks, LLC effectively achieves lossless model compression. Our code will be made publicly available.
Primary Area: Applications->Everything Else
Keywords: Loss-Driven, Compression
Submission Number: 14199