Pruning Pre-trained Language Models Without Fine-Tuning.

Ting Jiang, Deqing Wang, Fuzhen Zhuang, Ruobing Xie, Feng Xia

15 Jan 2026 · CoRR 2022 · CC BY-SA 4.0