On 1/n neural representation and robustness
Abstract: Understanding the nature of representation in neural networks is a goal shared by
neuroscience and machine learning. It is therefore exciting that both fields converge
not only on shared questions but also on similar approaches. A pressing question
in these areas is understanding how the structure of the representation used by
neural networks affects both their generalization and their robustness to perturbations.
In this work, we investigate the latter by juxtaposing experimental results on the
covariance spectrum of neural representations in mouse V1 (Stringer et al.) with
those of artificial neural networks. We use adversarial robustness to probe Stringer
et al.'s theory regarding the causal role of a 1/n covariance spectrum, empirically investigate the benefits such a neural code confers on neural networks, and
illuminate its role in multi-layer architectures. Our results show that imposing
the experimentally observed structure on artificial neural networks makes them
more robust to adversarial attacks. Moreover, our findings complement the existing
theory relating wide neural networks to kernel methods by showing the role of
intermediate representations.
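To make the central quantity concrete, here is a minimal, illustrative sketch (not the paper's code; all function names and parameters are ours) of how one can estimate the power-law exponent of a representation's covariance eigenspectrum. An exponent near 1 corresponds to the 1/n spectrum that Stringer et al. report for mouse V1.

import numpy as np

def covariance_eigenspectrum(activations):
    # activations: (num_stimuli, num_units) matrix of responses.
    centered = activations - activations.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (centered.shape[0] - 1)
    return np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, largest first

def power_law_exponent(eigvals, n_min=10, n_max=500):
    # Fit lambda_n ~ n^(-alpha) on a log-log scale; alpha ~= 1 is a 1/n spectrum.
    n = np.arange(1, len(eigvals) + 1)
    mask = (n >= n_min) & (n <= n_max) & (eigvals > 0)
    slope, _ = np.polyfit(np.log(n[mask]), np.log(eigvals[mask]), 1)
    return -slope

# Synthetic check: independent units whose variances decay exactly as 1/n.
rng = np.random.default_rng(0)
d = 1000
scales = np.sqrt(1.0 / np.arange(1, d + 1))
X = rng.standard_normal((5000, d)) * scales
print(power_law_exponent(covariance_eigenspectrum(X)))  # close to 1.0

The experiments described in the abstract go in the other direction, imposing a 1/n-like spectrum on network representations and measuring adversarial robustness; this sketch only shows how the exponent itself can be measured.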