Internal Representations of Vision Models Through the Lens of Frames on Data Manifolds

Published: 29 Nov 2023, Last Modified: 29 Nov 2023
Venue: NeurReps 2023 (Oral)
Submission Track: Proceedings
Keywords: Neural representations, manifold frames, data augmentation, adversarial training
TL;DR: We use a construction inspired by the concept of frames on a manifold to study how deep learning models process data
Abstract: While the last five years have seen considerable progress in understanding the internal representations of deep learning models, many questions remain. This is especially true when trying to understand the impact of model design choices, such as model architecture or training algorithm, on the geometry and dynamics of hidden representations. In this work we present a new approach to studying such representations, inspired by the idea of a frame on the tangent bundle of a manifold. Our construction, which we call a *neural frame*, is formed by assembling a set of vectors representing specific types of perturbations of a data point, for example infinitesimal augmentations, noise perturbations, or perturbations produced by a generative model, and studying how these vectors change as they pass through a network. Using neural frames, we observe how models process, layer by layer, specific modes of variation within a small neighborhood of a data point. Our results provide new perspectives on a number of phenomena, such as the way training with augmentation produces model invariance and the proposed trade-off between adversarial training and model generalization.
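To make the construction concrete, the sketch below shows one way the layer-by-layer tracking described in the abstract could be implemented. It is a minimal finite-difference illustration in PyTorch, not the paper's actual code: the function `neural_frame` and the names `layers`, `directions`, and `eps` are all illustrative assumptions. The perturbation directions stand in for the abstract's examples (an infinitesimally augmented input minus the original, a noise vector, or a generator-produced perturbation).

```python
import torch
import torch.nn as nn

@torch.no_grad()
def neural_frame(layers, x, directions, eps=1e-3):
    """Push a set of input perturbation directions through the network
    layer by layer, recording the rescaled displacement they induce at
    each layer -- a finite-difference stand-in for transporting tangent
    vectors along the model."""
    base = x
    perturbed = [x + eps * d for d in directions]
    frames = []
    for layer in layers:
        base = layer(base)
        perturbed = [layer(p) for p in perturbed]
        # Frame at this layer: displacement of each perturbed point from
        # the base point, divided by eps to approximate the action of the
        # composed layer Jacobians on the original direction.
        frames.append(torch.stack([(p - base) / eps for p in perturbed]))
    return frames

# Toy usage: a small MLP and three random perturbation directions.
layers = [nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)]
x = torch.randn(8)
directions = [torch.randn(8) for _ in range(3)]
for i, frame in enumerate(neural_frame(layers, x, directions)):
    print(f"layer {i}: frame shape {tuple(frame.shape)}")  # (3, width)
```

Under this reading, comparing the frames across layers (e.g., their norms or pairwise angles) is what lets one ask how a given mode of variation is amplified, suppressed, or mixed as it passes through the network.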
Submission Number: 6