FRanDI: Data-Free Neural Network Compression via Feature Regression and Deep Inversion

Published: 20 Sept 2024 · Last Modified: 03 Oct 2024 · ICOMP Publication · CC BY 4.0
Keywords: Neural Network Compression, Zero-Shot, Knowledge Transfer
Abstract: Contemporary post-training neural network compression methods make a model lighter and faster without a significant drop in performance. However, these methods depend heavily on the model's training data, which may be unavailable in practical scenarios. In this work, we present *FRanDI*, a novel framework that enables post-training neural network compression without data. Our method leverages a DeepInversion-based approach to generate synthetic data from the pre-trained model. We propose *Feature Regression*, a teacher-student recovery scheme that restores the performance the network loses during compression. In addition, we present *Output Discrepancy*, a proxy metric for evaluating compression policies that correlates with the original model's target metric. Unlike other data-free methods, our algorithm does not depend on the neural network's target task. We evaluate our framework on three different neural network compression approaches: low-rank weight approximation, unstructured pruning, and quantization.
Submission Number: 82
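
Since the paper body is not reproduced here, the following is a minimal PyTorch sketch of how the three ingredients the abstract names could fit together. All names (`synthesize_inputs`, `output_discrepancy`, `feature_regression_step`), layer choices, and hyperparameters are illustrative assumptions rather than the authors' implementation, and the DeepInversion step is reduced to its BatchNorm-statistics-matching core.

```python
import torch
import torch.nn.functional as F


def synthesize_inputs(teacher, num_steps=200, batch_size=32,
                      shape=(3, 32, 32), lr=0.05):
    """DeepInversion-style synthesis (reduced to its core): optimize random
    noise so that the batch statistics at every BatchNorm input match the
    running statistics stored in the pre-trained model."""
    teacher.eval()
    x = torch.randn(batch_size, *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)

    # Forward hooks collect a moment-matching loss at each BatchNorm2d.
    bn_losses = []

    def bn_hook(module, inputs, _output):
        mean = inputs[0].mean(dim=(0, 2, 3))
        var = inputs[0].var(dim=(0, 2, 3), unbiased=False)
        bn_losses.append(F.mse_loss(mean, module.running_mean)
                         + F.mse_loss(var, module.running_var))

    hooks = [m.register_forward_hook(bn_hook)
             for m in teacher.modules()
             if isinstance(m, torch.nn.BatchNorm2d)]

    for _ in range(num_steps):
        bn_losses.clear()
        opt.zero_grad()
        teacher(x)                              # hooks populate bn_losses
        torch.stack(bn_losses).sum().backward()
        opt.step()

    for h in hooks:
        h.remove()
    return x.detach()


def output_discrepancy(teacher, student, x_syn):
    """Task-agnostic proxy metric: distance between the outputs of the
    original and compressed models on synthetic inputs. Lower values are
    expected to correlate with smaller drops in the target metric."""
    teacher.eval()
    student.eval()
    with torch.no_grad():
        return F.mse_loss(student(x_syn), teacher(x_syn)).item()


def feature_regression_step(teacher, student, x_syn, layer_names, opt):
    """One recovery step: regress the compressed model's intermediate
    features onto the teacher's at matched layers, using synthetic inputs
    only. `layer_names` must list matched submodules in forward order so
    the two feature lists align (shapes match for pruning/quantization)."""
    teacher.eval()
    feats = {"t": [], "s": []}
    hooks = [teacher.get_submodule(n).register_forward_hook(
                 lambda m, i, o: feats["t"].append(o)) for n in layer_names]
    hooks += [student.get_submodule(n).register_forward_hook(
                  lambda m, i, o: feats["s"].append(o)) for n in layer_names]

    with torch.no_grad():
        teacher(x_syn)                          # fills feats["t"]
    student(x_syn)                              # fills feats["s"]

    loss = sum(F.mse_loss(s, t) for s, t in zip(feats["s"], feats["t"]))
    opt.zero_grad()
    loss.backward()
    opt.step()

    for h in hooks:
        h.remove()
    return loss.item()
```

A recovery loop under these assumptions would first call `x_syn = synthesize_inputs(teacher)`, then repeatedly apply `feature_regression_step` with an optimizer over the compressed model's remaining trainable parameters, while `output_discrepancy` serves to rank candidate compression policies without access to labels or the original task.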