Investigating Grokking phenomena below the Critical Data Regime

27 Sept 2024 (modified: 02 Dec 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Grokking
TL;DR: Understanding the behavior of the grokking phenomenon when the amount of training data falls below the critical data regime.
Abstract: In this paper, we explore the practical utility of grokking, a phenomenon where models generalize long after overfitting the training data. This offers a promising avenue for training on changing distributions, especially in data-scarce environments. We investigate a scenario where a model grokked on a distribution p1 is utilized to grok another model on a different distribution p2, particularly in a data crunch situation on the p2 distribution. We further explore distilling multiple small models grokked on different distributions to generalize a larger model. This approach is crucial where data is scarcely available for these different distributions, thus saving computational resources. Finally, we present a setup for continually pretraining a grokked model from distribution p1 to p2. Our experiments reveal that distilling from a grokked model provides quick generalization over the current task while simultaneously alleviating the forgetting of previous knowledge. We analyze these scenarios over various algorithmic tasks such as addition, subtraction, and multiplication. Our results provide a framework for efficient model training in dynamic and data-limited scenarios, enabling the development of more robust, adaptable systems.
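The distillation setup described in the abstract, transferring the soft predictions of a grokked teacher to a student in a data-scarce regime, is not specified in detail on this page. As a rough illustration only, a standard temperature-scaled distillation loss (the usual KL-divergence formulation; the function names here are hypothetical, not from the paper) might be sketched as:

```python
# Hedged sketch of a temperature-scaled distillation loss, as commonly used
# when training a student on a grokked teacher's soft targets. All names
# (`softmax`, `distill_loss`, `T`) are illustrative assumptions, not the
# authors' actual implementation.
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))) * T * T)
```

The loss is zero when the student reproduces the teacher's logits exactly and positive otherwise, so minimizing it pulls the student toward the grokked teacher's generalizing solution without requiring the full p2 dataset.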
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12306