Training Data Eigenvector Dynamics in the EigenPro Implementation of the Neural Tangent Kernel and Recursive Feature Machines

Submitted to Tiny Papers @ ICLR 2023 on 16 Mar 2023 (modified: 05 May 2023)
Abstract: There has been much recent work on kernel methods as a viable alternative to deep neural networks (DNNs). The advent of the $\textit{Neural Tangent Kernel}$ (NTK) has brought renewed interest in these methods and their application to typical deep learning tasks. Recently, kernel machines have been shown to be capable of feature learning similar to that of DNNs; these models are termed $\textit{Recursive Feature Machines}$ (RFMs). To accommodate the growing scale of kernel models, the EigenPro 3 algorithm was proposed to facilitate large-scale training via preconditioned gradient descent. We propose an accessible framework for observing the eigenvector dynamics of the training data in EigenPro's implementation of these kernel methods, and find empirically that significant change ceases early in training, with an apparent bias towards equilibrium. For RFMs, we find that significant change in the training-data eigenvectors typically tapers off within five iterations, consistent with reports that RFMs achieve optimal performance in five iterations. This represents a path forward in gaining intuition for the inner workings of large-scale kernel training methods. We provide an easy-to-use Python implementation of our framework at https://github.com/cgorlla/ep3dynamics.
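To make the measurement concrete, the quantity such a framework tracks can be sketched in a few lines of NumPy. This is a minimal sketch under stated assumptions, not the released implementation: it presumes access to a symmetric training-data kernel matrix at each iteration (the `kernel_matrices` list below is a hypothetical placeholder), and it measures per-iteration change of the top-k eigenspace via the principal angles between successive subspaces.

```python
# Minimal sketch, not the authors' released implementation. It assumes you
# can extract a symmetric (n x n) kernel matrix over the training data at
# each iteration; `kernel_matrices` is a hypothetical placeholder for that.
import numpy as np

def top_eigenvectors(K, k):
    """Top-k eigenvectors (as columns) of a symmetric matrix K."""
    _, vecs = np.linalg.eigh(K)  # eigh returns ascending eigenvalues
    return vecs[:, -k:]

def subspace_drift(prev, cur):
    """1 - mean cosine of the principal angles between two k-dim subspaces.
    0 means the top-k eigenspaces coincide; larger values mean more change."""
    cosines = np.linalg.svd(prev.T @ cur, compute_uv=False)
    return 1.0 - cosines.mean()

def eigenvector_dynamics(kernel_matrices, k=10):
    """Per-iteration drift of the top-k eigenspace of the kernel matrix."""
    subspaces = [top_eigenvectors(K, k) for K in kernel_matrices]
    return [subspace_drift(a, b) for a, b in zip(subspaces, subspaces[1:])]

# Hypothetical usage: kernel_matrices[t] would be the kernel evaluated on
# the training data after iteration t (e.g., an RFM's learned kernel).
# drifts = eigenvector_dynamics(kernel_matrices, k=10)
# A drift curve that flattens near zero after a few iterations would match
# the paper's observation that significant change ceases early in training.
```

Measuring drift through principal angles rather than matching individual eigenvectors avoids sign ambiguity and instability when nearby eigenvalues cross between iterations.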