Keywords: Reinforcement Learning, Quantization Formats, Floating-Point, NAS, Neural Architecture Search
Abstract: Quantization has become a mainstream compression technique for reducing the model size, computational requirements, and energy consumption of modern deep neural networks (DNNs).
With improved numerical support in recent hardware, including multiple variants of integer and floating-point formats, mixed-precision quantization has become necessary to achieve high-quality results at low model cost.
Prior mixed-precision methods have performed either a post-training quantization search, which compromises on accuracy, or a differentiable quantization search, which leads to high memory usage from branching.
Therefore, we propose the first one-shot mixed-precision quantization search that eliminates the need for retraining in both integer and low-precision floating-point models.
We evaluate our search (FLIQS) on multiple convolutional and vision transformer networks to discover Pareto-optimal models.
Our approach improves upon uniform precision, manual mixed-precision, and recent integer quantization search methods.
With integer models, we increase the accuracy of ResNet-18 on ImageNet by 1.31 percentage points and ResNet-50 by 0.90 percentage points at equivalent model cost over previous methods.
Additionally, we perform the first mixed-precision floating-point search and improve MobileNetV2 by up to 0.98 percentage points compared to prior state-of-the-art FP8 models.
Finally, we extend FLIQS to simultaneously search a joint quantization and neural architecture space and improve ImageNet accuracy by 2.69 percentage points with similar model cost on a MobileNetV2 search space.
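For readers unfamiliar with the setting, the following is a minimal illustrative sketch of what a per-layer mixed-precision assignment looks like; the quantize_uniform helper and the layer_bits mapping are hypothetical examples, not part of FLIQS, which instead searches such bit-width assignments automatically.

```python
import numpy as np

def quantize_uniform(x: np.ndarray, num_bits: int) -> np.ndarray:
    """Fake-quantize x with symmetric, per-tensor uniform integer quantization."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8-bit
    scale = np.max(np.abs(x)) / qmax               # per-tensor scale factor
    if scale == 0:
        return np.zeros_like(x)
    q = np.clip(np.round(x / scale), -qmax, qmax)  # project onto integer grid
    return q * scale                               # dequantize back to float

# Hypothetical per-layer bit widths, as a mixed-precision search might assign them.
layer_bits = {"conv1": 8, "block1.conv": 4, "fc": 8}

weights = {name: np.random.randn(16, 16).astype(np.float32) for name in layer_bits}
for name, w in weights.items():
    w_q = quantize_uniform(w, layer_bits[name])
    mse = np.mean((w - w_q) ** 2)
    print(f"{name}: {layer_bits[name]}-bit, quantization MSE {mse:.5f}")
```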
Submission Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Code And Dataset Supplement: zip
Evaluation Metrics: No
Submission Number: 32