The traditional notion of "Junk DNA" has long been associated with the non-coding segments that constitute roughly 98% of the human genome. Although these sequences were initially perceived as biologically inert, recent research has unveiled the critical roles that some of them play in cellular processes. Intriguingly, the weights of deep neural networks have been subject to a remarkably similar presumption of redundancy: it was long believed that gigantic models carry so many superfluous parameters that a significant number of them can be removed without compromising performance.
This paper challenges this conventional wisdom by presenting a compelling counter-argument. We employ sparsity (specifically, weight pruning) as a tool to isolate and quantify the nuanced significance of low-magnitude weights in pre-trained large language models (LLMs). Our study demonstrates a strong correlation between these weight magnitudes and the knowledge they encapsulate for downstream tasks. Drawing a parallel with the biological insight above, we put forward the "Junk DNA Hypothesis", backed by our in-depth investigation: while small-magnitude weights may appear nearly "useless" for simple tasks and thus suitable for pruning, they actually encode crucial knowledge necessary for solving more difficult downstream tasks. Removing these seemingly insignificant weights can lead to \underline{irreversible} knowledge forgetting and performance degradation on difficult tasks.
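Concretely, the pruning criterion is magnitude-based: within a layer, the weights with the smallest absolute values are zeroed out. The following is a minimal per-layer sketch, assuming PyTorch; the 50% ratio and per-layer thresholding are illustrative choices rather than our exact experimental configuration.
\begin{verbatim}
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a mask that zeroes out the `sparsity` fraction of
    smallest-magnitude entries in a weight matrix."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    # Threshold at the k-th smallest absolute value; prune everything below it.
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

# Illustrative usage: 50% unstructured sparsity on every linear layer.
# for module in model.modules():
#     if isinstance(module, torch.nn.Linear):
#         module.weight.data *= magnitude_prune(module.weight.data, 0.5)
\end{verbatim}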
To study this formally, we introduce several quantifiable metrics for gauging downstream task difficulty: (i) within the same task category, we vary the adequacy of target-domain data (e.g., few-shot fine-tuning), extend this to multi-domain learning (e.g., majority versus minority languages in multilingual translation), and assess the availability of external information (e.g., open-book versus closed-book QA); (ii) across diverse task categories, we use the normalized performance gap between humans and models as an indicator of LLM-facing task complexity. Our extensive experiments validate the Junk DNA Hypothesis across a spectrum of model scales, tasks, and datasets, employing both unstructured and structured (N:M) forms of sparsity. We also confirm empirically that the essential knowledge indeed resides within the pre-trained weights, and that the performance drop does not stem from constrained model capacity after pruning. These findings offer fresh insights into how LLMs encode knowledge in a task-sensitive manner, present challenges for future research on model pruning, and open avenues for task-aware conditional computation during inference. Code will be released.
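For completeness, N:M structured sparsity keeps at most N non-zero weights within every group of M consecutive weights (e.g., 2:4), again selected by magnitude. The sketch below, assuming PyTorch and a 2D linear-layer weight, is likewise illustrative rather than a faithful reproduction of our experimental pipeline.
\begin{verbatim}
import torch

def nm_prune(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Return a mask keeping the n largest-magnitude weights in each
    group of m consecutive weights along the input dimension."""
    rows, cols = weight.shape
    assert cols % m == 0, "input dimension must be divisible by m"
    groups = weight.abs().reshape(rows, cols // m, m)
    keep = groups.topk(n, dim=-1).indices   # survivors within each group of m
    mask = torch.zeros_like(groups)
    mask.scatter_(-1, keep, 1.0)            # mark the n survivors
    return mask.reshape(rows, cols)

# Illustrative usage (2:4): module.weight.data *= nm_prune(module.weight.data)
\end{verbatim}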