Knowledge Translation: A New Pathway for Model Compression

TMLR Paper 2044 Authors

11 Jan 2024 (modified: 29 Mar 2024) · Withdrawn by Authors
Abstract: Deep learning has witnessed significant advancements in recent years, at the cost of increasing training, inference, and model storage overhead. While existing model compression methods strive to reduce the number of model parameters while maintaining high accuracy, they inevitably necessitate re-training the compressed model or impose architectural constraints. To overcome these limitations, this paper presents a novel framework, termed Knowledge Translation (KT), wherein a "translation" model is trained to receive the parameters of a larger model and generate compressed parameters. The concept of KT draws inspiration from language translation, which employs neural networks to convert text between languages while preserving its meaning. Accordingly, we explore the potential of neural networks to convert models of disparate sizes while preserving their functionality. We propose a comprehensive framework for KT, introduce data augmentation strategies to enhance model performance despite limited training data, and successfully demonstrate the feasibility of KT on the MNIST dataset. Code is available in the supplementary material.
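
To make the core idea concrete, the following is a minimal sketch of what a "translation" model could look like: a network that takes the flattened parameters of a larger model as input and outputs parameters for a smaller model. The toy MLP sizes, the PyTorch framing, and the MSE regression objective are illustrative assumptions and not the authors' actual implementation or training procedure.

```python
# Hypothetical sketch of Knowledge Translation: a "translator" network maps
# the flattened parameters of a large model to parameters for a small model.
# All architectures, sizes, and the loss below are assumptions for illustration.
import torch
import torch.nn as nn


def flatten_params(model: nn.Module) -> torch.Tensor:
    """Concatenate all of a model's parameters into one flat vector."""
    return torch.cat([p.detach().flatten() for p in model.parameters()])


# Toy large and small models (assumed MLP classifiers for MNIST-sized inputs).
large_model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
small_model = nn.Sequential(nn.Linear(784, 16), nn.ReLU(), nn.Linear(16, 10))

large_dim = flatten_params(large_model).numel()
small_dim = flatten_params(small_model).numel()

# The translation model: large-model parameters in, compressed parameters out.
translator = nn.Sequential(
    nn.Linear(large_dim, 256),
    nn.ReLU(),
    nn.Linear(256, small_dim),
)

# One hypothetical training step: given a pair of large-model parameters and
# functionally equivalent small-model parameters, regress the translator's
# output onto the small-model parameters with a simple MSE loss.
optimizer = torch.optim.Adam(translator.parameters(), lr=1e-4)
large_params = flatten_params(large_model).unsqueeze(0)  # shape (1, large_dim)
target_small = flatten_params(small_model).unsqueeze(0)  # shape (1, small_dim)

pred_small = translator(large_params)
loss = nn.functional.mse_loss(pred_small, target_small)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice such a translator would be trained over a dataset of (large-model, small-model) parameter pairs, and the predicted small-model parameters would then be loaded into the compact architecture for inference without re-training it from scratch.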
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Ying_Wei1
Submission Number: 2044