Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Junk DNA Hypothesis, low-magnitude weights, large-scale language models
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We propose the Junk DNA Hypothesis for LLMs: a massive portion of the small-magnitude weights in deep networks appears "non-functional" for simple tasks, yet is activated to play essential roles in harder tasks.
Abstract: The traditional notion of "Junk DNA" has long been linked to non-coding segments within the human genome, constituting roughly 98% of its composition. Initially perceived as biologically inert, recent research has unveiled the critical roles some of these seemingly non-functional DNA sequences play in cellular processes. Intriguingly, the weights within deep neural networks exhibit a remarkable similarity to the redundancy observed in human genes. Weights in gigantic models were likewise believed to be excessively redundant, leading to the belief that a significant number of parameters could be removed without compromising performance. This paper challenges this conventional wisdom by presenting a compelling **counter-argument**. We employ sparsity (specifically weight pruning) as a tool to isolate and quantify the nuanced significance of low-magnitude weights in pre-trained large language models (LLMs). Our study demonstrates a strong correlation between these weight magnitudes and the knowledge they encapsulate for downstream tasks. Drawing parallels with biological insights, we put forward the "**Junk DNA Hypothesis**", backed by our in-depth investigation: while small-magnitude weights may appear nearly "useless" for simple tasks and thus suitable for pruning, they actually encode crucial knowledge necessary for solving more difficult downstream tasks. Removing these seemingly insignificant weights can lead to \underline{irreversible} knowledge forgetting and performance damage on difficult tasks. To study this formally, we introduce several quantifiable metrics for gauging **downstream task difficulty**: (i) within the same task category, we vary the adequacy of target-domain data (e.g., few-shot fine-tuning) and extend this to multi-domain learning (e.g., majority versus minority languages in multilingual translation); we additionally assess the availability of external information (e.g., open-book versus closed-book QA); (ii) across diverse task categories, we use the normalized performance gap between humans and models as an indicator of LLM-facing task complexity. Our extensive experiments validate the Junk DNA Hypothesis across a spectrum of model scales, tasks, and datasets, employing both forms of sparsity, unstructured and structured (N:M). We also empirically confirm that the essential knowledge indeed resides within the pre-trained weights, and that the performance drop does not stem from constrained model capacity after pruning. These findings offer fresh insights into how LLMs encode knowledge in a task-sensitive manner, present challenges for future research in model pruning, and open avenues for task-aware conditional computation during inference. Code will be released.
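To make the pruning tool concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the two sparsity forms the abstract mentions, unstructured magnitude pruning and N:M structured pruning, plus one plausible reading of the normalized human-model performance gap used as a cross-task difficulty proxy. The function names, the PyTorch framing, and the exact normalization are assumptions made for illustration.

```python
import torch

def magnitude_prune_unstructured(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of entries with the smallest magnitudes."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight.clone()
    # The k-th smallest absolute value serves as the pruning threshold.
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

def magnitude_prune_nm(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """N:M structured pruning: keep the n largest-magnitude weights in every group of m.
    Assumes weight.numel() is divisible by m."""
    groups = weight.reshape(-1, m)
    keep_idx = groups.abs().topk(n, dim=1).indices          # n largest entries per group
    mask = torch.zeros_like(groups).scatter_(1, keep_idx, 1.0)
    return (groups * mask).reshape_as(weight)

def normalized_human_model_gap(human_score: float, model_score: float) -> float:
    """One plausible form of the cross-task difficulty proxy: the model's shortfall
    relative to human performance (the paper's exact normalization may differ)."""
    return (human_score - model_score) / human_score

# Toy usage on a random matrix standing in for a pre-trained layer's weights.
W = torch.randn(8, 16)
print((magnitude_prune_unstructured(W, 0.5) == 0).float().mean())  # ~0.5 sparsity
print((magnitude_prune_nm(W, 2, 4) == 0).float().mean())           # exactly 0.5 for 2:4
```

In this framing, the "low-magnitude weights" the hypothesis concerns are exactly the entries such masks zero out; the paper's experiments compare task performance before and after removing them at varying sparsity levels.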
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6083