Learngene Tells You How to Customize: Task-Aware Parameter Prediction at Flexible Scales

27 Sept 2024 (modified: 23 Jan 2025) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: parameter prediction; learngene
Abstract: Reducing serving costs and latency is a fundamental challenge for deploying large-scale models in business applications. To address this demand, the Learngene framework encapsulates shareable information from large models into a compact unit called a learngene. This unit initializes downstream models, enabling them to inherit knowledge from the large model efficiently, with the aim of reducing deployment costs. However, existing learngene methods are constrained by their strong dependence on the architecture of the large model and their neglect of target-task characteristics, resulting in suboptimal adaptability of downstream models to deployment requirements. In this paper, we present Task-Aware Learngene (TAL), a novel method based on graph hypernetworks that predicts model parameters conditioned on desired model scales and task-specific characteristics. Extensive experiments demonstrate that TAL effectively scales model initialization parameters, selectively utilizes shareable information pertinent to target tasks, and consistently outperforms random initialization and existing parameter prediction methods. Furthermore, TAL exhibits promising transfer learning capabilities on unseen tasks, underscoring its effectiveness in condensing large-model knowledge while remaining aware of downstream requirements.
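To make the core idea concrete, here is a minimal toy sketch of parameter prediction at flexible scales. It is not the paper's implementation (TAL uses graph hypernetworks; this is a plain MLP hypernetwork), and all names, dimensions, and the per-unit positional conditioning are illustrative assumptions: a small network maps a task embedding plus a target-width signal to the weights of a downstream layer, so the same hypernetwork can instantiate layers of different sizes for different tasks.

```python
import numpy as np

# Toy hypernetwork sketch (hypothetical, not the TAL architecture):
# predict the weight matrix of a target linear layer, conditioned on a
# task embedding and a desired output width ("scale").

rng = np.random.default_rng(0)

TASK_DIM = 8      # dimensionality of the task embedding (assumed)
HIDDEN = 32       # hypernetwork hidden size (assumed)
IN_FEATURES = 4   # input size of the target layer (assumed)

# Hypernetwork parameters: a 2-layer MLP shared across all target units.
W1 = rng.normal(0, 0.1, (TASK_DIM + 1, HIDDEN))   # +1 for the position scalar
W2 = rng.normal(0, 0.1, (HIDDEN, IN_FEATURES))

def predict_layer_weights(task_emb: np.ndarray, out_features: int) -> np.ndarray:
    """Predict an (out_features, IN_FEATURES) weight matrix.

    Flexible scale is handled by generating one output row per target unit,
    conditioning each row on the unit's normalized position in [0, 1], so
    any out_features can be requested from the same hypernetwork.
    """
    rows = []
    for j in range(out_features):
        pos = j / max(out_features - 1, 1)       # normalized unit index
        z = np.concatenate([task_emb, [pos]])    # condition on task + position
        h = np.tanh(z @ W1)                      # shared hidden representation
        rows.append(h @ W2)                      # one predicted weight row
    return np.stack(rows)

task_emb = rng.normal(size=TASK_DIM)             # stand-in task descriptor
W_small = predict_layer_weights(task_emb, out_features=3)
W_large = predict_layer_weights(task_emb, out_features=16)
print(W_small.shape, W_large.shape)  # (3, 4) (16, 4)
```

The same task embedding yields initializations at two different widths, which is the "flexible scales" property; swapping in a different task embedding changes the predicted parameters, which is the task-aware part.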
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8772
