SliceFine: The Universal Winning-Slice Hypothesis for Pretrained Networks

ICLR 2026 Conference Submission 20554 Authors

19 Sept 2025 (modified: 08 Oct 2025) · License: CC BY 4.0
Keywords: PEFT, subnetwork/slice selection, spectral balance, universal winning slice, lottery-ticket, fine-tuning, transfer learning, LLMs, vision transformers.
TL;DR: Pretrained models contain “universal winning slices”; tuning tiny random weight slices with SliceFine—adding no new parameters—matches SOTA across language and vision.
Abstract: This paper presents a theoretical framework that explains why fine-tuning small, randomly selected subnetworks (slices) within pretrained models is sufficient for downstream adaptation. We prove that pretrained networks exhibit a universal winning slice property, arising from two phenomena: (1) spectral balance, where the eigenspectra of different weight-matrix slices are remarkably similar; and (2) high task energy, where the backbone's pretrained representations retain rich, task-relevant features. This leads to the Universal Winning Slice Hypothesis, which provides a theoretical foundation for parameter-efficient fine-tuning (PEFT) in large-scale models. Building on this, we propose SliceFine, a PEFT method that exploits this inherent redundancy by updating only selected slices of the original weights, introducing zero new parameters, unlike adapter-based approaches. Empirically, SliceFine matches the performance of state-of-the-art (SOTA) PEFT methods across various language and vision tasks, while significantly improving training speed, memory efficiency, and model compactness. Our work bridges theory and practice, offering a theoretically grounded alternative to existing PEFT techniques.
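To make the mechanism concrete, below is a minimal PyTorch sketch of the slice-only update the abstract describes: all pretrained weights are kept, and only a small randomly chosen block of rows in each weight matrix receives gradient updates. The slice placement (consecutive rows), the slice width, and the gradient-masking hook are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of slice-only fine-tuning: update a tiny random slice
# of each pretrained weight matrix, adding no new parameters.
# Slice shape and masking mechanism are assumptions for illustration.
import torch
import torch.nn as nn

def enable_slice_finetuning(linear: nn.Linear, slice_width: int) -> None:
    """Keep only a random block of `slice_width` consecutive rows of
    `linear.weight` trainable; gradients elsewhere are zeroed."""
    out_dim = linear.weight.shape[0]
    start = torch.randint(0, out_dim - slice_width + 1, (1,)).item()
    mask = torch.zeros_like(linear.weight)
    mask[start:start + slice_width, :] = 1.0

    # Mask the gradient so the optimizer only touches the chosen slice;
    # the weight tensor itself is unchanged (zero new parameters).
    linear.weight.register_hook(lambda grad: grad * mask)
    if linear.bias is not None:
        linear.bias.requires_grad_(False)

# Usage: apply a tiny trainable slice to every linear layer of a
# (stand-in) pretrained backbone before fine-tuning as usual.
model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 10))
for module in model.modules():
    if isinstance(module, nn.Linear):
        enable_slice_finetuning(module, slice_width=8)
```

In this sketch the masked-out weights still carry `requires_grad=True`, so a production version would also exclude them from optimizer state (and weight decay) to realize the memory savings the paper reports.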
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 20554