Category-orthogonal object features guide information processing in recurrent neural networks trained for object categorization
Keywords: inductive bias, recurrent processing, explainability, emergent processes
TL;DR: In RNNs, category-orthogonal information (location, scale, etc. of the object) is conveyed through recurrent connectivity and is used to optimise category judgements in cluttered environments.
Abstract: Recurrent neural networks (RNNs) have been shown to perform better than feedforward architectures in visual object categorization tasks, especially in challenging conditions such as cluttered images. However, little is known about the exact computational role of recurrent information flow in these conditions. Here we test the hypothesis that, in RNNs trained for object categorization, recurrence iteratively aids categorization via the communication of category-orthogonal auxiliary variables (the location, orientation, and scale of the object). Using diagnostic linear readouts, we find that: (a) information about auxiliary variables increases across time in all network layers, (b) this information is indeed present in the recurrent information flow, and (c) its manipulation significantly affects task performance. These observations confirm the hypothesis that category-orthogonal auxiliary variable information is conveyed through recurrent connectivity and is used to optimize category inference in cluttered environments.
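The "diagnostic linear readouts" mentioned in the abstract amount to fitting a simple linear decoder to a layer's activations at each recurrent timestep and measuring how well a category-orthogonal variable can be predicted. Below is a minimal sketch (not the authors' code) of this idea; the hidden states and location labels are random stand-ins for what would, in practice, be recorded from the trained RNN and the stimulus set.

```python
# Sketch of a diagnostic linear readout: decode a category-orthogonal
# variable (here, the image quadrant containing the object) from RNN layer
# activations at each recurrent timestep. Placeholder data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_units, n_timesteps = 2000, 256, 8

# Stand-in activations with shape (timesteps, images, units); in practice
# these would be the recorded responses of one RNN layer across timesteps.
hidden_states = rng.standard_normal((n_timesteps, n_images, n_units))
# Stand-in auxiliary labels, e.g. the quadrant in which the object appears.
location_labels = rng.integers(0, 4, size=n_images)

for t in range(n_timesteps):
    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states[t], location_labels, test_size=0.2, random_state=0
    )
    readout = LogisticRegression(max_iter=1000)  # the linear diagnostic readout
    readout.fit(X_train, y_train)
    # Accuracy rising with t would indicate auxiliary information
    # accumulating over recurrent processing, as reported in finding (a).
    acc = readout.score(X_test, y_test)
    print(f"timestep {t}: location decoding accuracy = {acc:.3f}")
```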
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/category-orthogonal-object-features-guide/code)