TL;DR: We propose a multi-objective, weight-sharing-based neural architecture search method to prune large language models fine-tuned on downstream tasks.
Keywords: neural architecture search, large language models
Optional Submission Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Cpu Hours: 0
Gpu Hours: 0
Tpu Hours: 0
Submission Number: 5