Keywords: Deep Learning, Language Models, Interpretability, Training Data Attribution, Loss Landscape
TL;DR: We introduce new gradient sketching algorithms as a toolkit for model analysis.
Abstract: The study of modern machine learning models often necessitates storing vast quantities of gradients or Hessian-vector products (HVPs). Traditional sketching methods struggle to scale under these memory constraints. We present a novel framework for scalable gradient and HVP sketching, tailored to modern hardware. We provide theoretical guarantees and demonstrate the power of our methods in applications such as training data attribution, Hessian spectrum analysis, and intrinsic dimension computation for pre-trained language models. Our work sheds new light on the behavior of these models, challenging common assumptions about their intrinsic dimensionality and Hessian properties.
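To make the problem setting concrete: the abstract does not specify the authors' algorithm, but a minimal sketch of the generic baseline it improves on, gradient sketching via a dense Gaussian random projection (Johnson–Lindenstrauss), might look as follows. All names here (`sketch_gradient`, the dimensions `d` and `k`) are illustrative assumptions, not the paper's method; note that materializing the dense `k × d` projection is exactly the memory bottleneck such traditional approaches hit at language-model scale.

```python
import torch

def sketch_gradient(grad: torch.Tensor, proj: torch.Tensor) -> torch.Tensor:
    """Project a flattened gradient into a low-dimensional sketch space."""
    return proj @ grad

d, k = 10_000, 256  # parameter dimension, sketch dimension (d >> k)
torch.manual_seed(0)
# Gaussian JL projection, scaled so squared norms are preserved in expectation.
proj = torch.randn(k, d) / k**0.5
grad = torch.randn(d)  # stand-in for a flattened per-example model gradient

sketch = sketch_gradient(grad, proj)
# Inner products between sketched gradients approximately preserve those of
# the originals, which is what gradient-similarity-based training data
# attribution relies on: <P g1, P g2> ≈ <g1, g2>.
```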
Primary Area: Interpretability and explainability
Submission Number: 1229