Structural Pruning of Pre-trained Language Models via Neural Architecture Search

Published: 03 Sept 2024, Last Modified: 17 Sept 2024, Accepted by TMLR, License: CC BY 4.0
Authors that are also TMLR Expert Reviewers: ~Aaron_Klein1
Abstract: Pre-trained language models (PLMs) such as BERT or RoBERTa mark the state of the art for natural language understanding tasks when fine-tuned on labeled data. However, their large size poses challenges for deployment in real-world inference settings, due to significant GPU memory requirements and high inference latency. This paper explores neural architecture search (NAS) for structural pruning to find sub-parts of the fine-tuned network that optimally trade off efficiency, for example in terms of model size or latency, against generalization performance. We also show how more recently developed two-stage weight-sharing NAS approaches can be used in this setting to accelerate the search process. Unlike traditional pruning methods with fixed thresholds, we adopt a multi-objective approach that identifies the Pareto-optimal set of sub-networks, allowing for a more flexible and automated compression process.
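To illustrate the multi-objective idea in the abstract, the following is a minimal sketch of a random search over sub-network configurations followed by Pareto-front extraction. The search space, the synthetic evaluation proxy, and all names here are illustrative assumptions, not the authors' implementation; the actual code is in the repository linked below. In practice, each candidate would be extracted from the fine-tuned PLM (e.g. via weight sharing) and evaluated for real size/latency and validation accuracy.

```python
import random

# Hypothetical search space over sub-networks of a 12-layer, 12-head encoder
# (illustrative assumption, not the paper's actual search space).
SEARCH_SPACE = {
    "num_layers": list(range(1, 13)),
    "num_heads": [1, 2, 4, 6, 8, 12],
    "ffn_ratio": [0.25, 0.5, 0.75, 1.0],
}


def sample_candidate(rng):
    """Draw one sub-network configuration uniformly at random."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}


def evaluate(candidate):
    """Placeholder objectives: (relative model size, validation error).

    A real evaluation would slice the fine-tuned PLM to this configuration
    and measure parameters/latency and held-out accuracy; a synthetic proxy
    keeps the sketch self-contained.
    """
    size = (
        candidate["num_layers"] / 12
        * (0.5 + 0.5 * candidate["num_heads"] / 12)
        * (0.5 + 0.5 * candidate["ffn_ratio"])
    )
    error = 0.1 + 0.3 * (1.0 - size) ** 2  # toy proxy: smaller model, higher error
    return size, error


def pareto_front(points):
    """Keep candidates not dominated in (size, error); both objectives minimized."""
    front = []
    for cand, (s, e) in points:
        dominated = any(
            (s2 <= s and e2 <= e) and (s2 < s or e2 < e)
            for _, (s2, e2) in points
        )
        if not dominated:
            front.append((cand, (s, e)))
    return front


if __name__ == "__main__":
    rng = random.Random(0)
    evaluated = [(c, evaluate(c)) for c in (sample_candidate(rng) for _ in range(200))]
    for cand, (size, err) in sorted(pareto_front(evaluated), key=lambda x: x[1][0]):
        print(f"size={size:.2f}  val_error={err:.3f}  config={cand}")
```

Rather than returning a single pruned model for a fixed threshold, the output is the whole Pareto set, from which a practitioner can pick the sub-network matching their memory or latency budget.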
Certifications: Survey Certification, Expert Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/whittle-org/plm_pruning
Assigned Action Editor: ~Sebastian_U_Stich1
Submission Number: 2620