Grokking Beyond Neural Networks: An Empirical Exploration with Model Complexity

Published: 19 Mar 2024, Last Modified: 19 Mar 2024, Accepted by TMLR
Abstract: In some settings, neural networks exhibit a phenomenon known as \textit{grokking}, where they achieve perfect or near-perfect accuracy on the validation set long after the same level of performance has been reached on the training set. In this paper, we discover that grokking is not limited to neural networks but occurs in other settings, such as Gaussian process (GP) classification, GP regression, linear regression and Bayesian neural networks. We also uncover a mechanism by which to induce grokking on algorithmic datasets via the addition of dimensions containing spurious information. The presence of the phenomenon in non-neural architectures shows that grokking is not restricted to the settings considered in current theoretical and empirical studies. Instead, grokking may be possible in any model where solution search is guided by complexity and error.
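To make the "spurious dimensions" mechanism concrete, the sketch below shows one way such an augmentation could look on a modular-addition dataset: one-hot operand encodings are concatenated with extra noise dimensions that carry no label information. This is a hedged illustration, not the authors' implementation from the tiny-gen repository; the modulus, noise scale and encoding are assumptions chosen for clarity.

```python
# Minimal sketch (assumptions, not the paper's code): append spurious,
# label-irrelevant dimensions to an algorithmic dataset such as
# (a + b) mod p classification.
import numpy as np

def modular_addition_data(p=7, n_spurious=20, noise_std=1.0, seed=0):
    """Build (a, b) -> (a + b) mod p pairs, one-hot encode the operands,
    then concatenate extra dimensions of pure noise (spurious information)."""
    rng = np.random.default_rng(seed)
    a, b = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
    a, b = a.ravel(), b.ravel()
    y = (a + b) % p

    # Informative part: one-hot encodings of the two operands.
    x = np.zeros((p * p, 2 * p))
    x[np.arange(p * p), a] = 1.0
    x[np.arange(p * p), p + b] = 1.0

    # Spurious part: dimensions carrying no information about the label.
    spurious = rng.normal(0.0, noise_std, size=(p * p, n_spurious))
    return np.concatenate([x, spurious], axis=1), y

X, y = modular_addition_data()
print(X.shape, y.shape)  # (49, 34) (49,)
```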
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Added a subsection in the discussion concerning the real-world applications of our work, an acknowledgements section and a supplementary material section.
Video: https://www.youtube.com/watch?v=--RAHz68f3c
Code: https://github.com/jackmiller2003/tiny-gen
Assigned Action Editor: ~Anastasios_Kyrillidis2
Submission Number: 1713