Task-Relevant Covariance from Manifold Capacity Theory Improves Robustness in Deep Networks

Published: 10 Oct 2024, Last Modified: 03 Nov 2024 · UniReps · CC BY 4.0
Supplementary Material: pdf
Track: Extended Abstract Track
Keywords: Neural manifolds, Representational geometry, Domain adaptation, Out-of-distribution generalization, Deep learning
Abstract: Analysis of high-dimensional representations in neuroscience and deep learning traditionally places equal importance on all points in a representation, potentially leading to significant information loss. Recent advances in manifold capacity theory offer a principled framework for identifying the computationally relevant points on neural manifolds. In this work, we introduce the concept of *task-relevant class covariance* to identify directions in representation space that support class discriminability. We demonstrate that scaling representations along these directions markedly improves simulated accuracy under distribution shift. Building on these insights, we propose AnchorBlocks, architectural modules that use task-relevant class covariance to align representations with a task-relevant eigenspace. By appending a single AnchorBlock to ResNet18, we achieve competitive performance on a standard domain adaptation benchmark (CIFAR-10C) against much larger robustness-promoting architectures. Our findings provide insight into neural population geometry and methods for interpreting and building robust deep learning systems.
Submission Number: 50
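
The abstract does not spell out how an AnchorBlock is constructed, so the following is a minimal, hypothetical sketch of the idea it describes: estimate a task-relevant class covariance from in-distribution features, then rescale representations along its top eigendirections before classification. The helper names (`class_covariance`, `AnchorBlock`, `k`, `gain`) and the choice to approximate task-relevant covariance with the covariance of per-class mean points are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn as nn

def class_covariance(features, labels):
    """Illustrative stand-in for task-relevant class covariance:
    covariance of per-class mean ("anchor") points.
    features: (N, D) tensor; labels: (N,) long tensor."""
    means = torch.stack([features[labels == c].mean(0)
                         for c in labels.unique()])          # (C, D)
    centered = means - means.mean(0, keepdim=True)
    return centered.T @ centered / (means.shape[0] - 1)      # (D, D)

class AnchorBlock(nn.Module):
    """Sketch of an AnchorBlock: rotate features into the eigenbasis of
    a covariance estimate and amplify the top-k directions, on the
    assumption that those directions support class discriminability."""
    def __init__(self, eigvecs, k, gain=2.0):
        super().__init__()
        # eigvecs: (D, D), columns sorted by descending eigenvalue
        self.register_buffer("basis", eigvecs)
        scale = torch.ones(eigvecs.shape[1])
        scale[:k] = gain                      # amplify task-relevant directions
        self.register_buffer("scale", scale)

    def forward(self, x):                     # x: (N, D)
        z = x @ self.basis                    # coordinates in the eigenbasis
        return (z * self.scale) @ self.basis.T  # rescale, map back to feature space
```

Under these assumptions, the block would be fit on in-distribution features from the backbone and appended before the classifier head:

```python
cov = class_covariance(train_feats, train_labels)
evals, evecs = torch.linalg.eigh(cov)       # eigenvalues in ascending order
block = AnchorBlock(evecs.flip(-1), k=16)   # flip columns to descending order
robust_feats = block(test_feats)
```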