Category-orthogonal object features guide information processing in recurrent neural networks trained for object categorization

Published: 02 Nov 2021, Last Modified: 05 May 2023 · SVRHM 2021 Poster
Keywords: inductive bias, recurrent processing, explainability, emergent processes
TL;DR: In RNNs, category-orthogonal information (e.g., the location and scale of the object) is conveyed through recurrent connectivity and is used to optimize category judgements in cluttered environments.
Abstract: Recurrent neural networks (RNNs) have been shown to outperform feedforward architectures in visual object categorization tasks, especially under challenging conditions such as cluttered images. However, little is known about the exact computational role of recurrent information flow under these conditions. Here we use RNNs trained for object categorization to test the hypothesis that recurrence iteratively aids categorization via the communication of category-orthogonal auxiliary variables (the location, orientation, and scale of the object). Using diagnostic linear readouts, we find that: (a) information about auxiliary variables increases across time in all network layers, (b) this information is indeed present in the recurrent information flow, and (c) its manipulation significantly affects task performance. These observations confirm the hypothesis that category-orthogonal auxiliary variable information is conveyed through recurrent connectivity and is used to optimize category inference in cluttered environments.
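The diagnostic linear readout analysis can be illustrated with a minimal sketch: a linear decoder is fit to a recurrent layer's hidden state at each time step and scored on how well it recovers an auxiliary variable such as object location. The arrays `hidden_states` and `object_x` below are hypothetical stand-ins for activations and labels extracted from a trained RNN; this is not the paper's implementation.

```python
# Minimal sketch of a diagnostic linear readout (illustrative only).
# Assumes hidden_states has shape (n_trials, n_timesteps, n_units) and
# object_x holds one auxiliary variable (hypothetical horizontal object
# location) per trial.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_timesteps, n_units = 500, 8, 128

# Hypothetical stand-ins for activations from a trained RNN and the
# corresponding category-orthogonal label (object location in pixels).
hidden_states = rng.standard_normal((n_trials, n_timesteps, n_units))
object_x = rng.uniform(0, 224, size=n_trials)

# Fit a separate ridge-regression readout per time step and report
# cross-validated R^2: rising scores across time steps would indicate that
# auxiliary-variable information accumulates in the recurrent dynamics.
for t in range(n_timesteps):
    scores = cross_val_score(Ridge(alpha=1.0), hidden_states[:, t, :],
                             object_x, cv=5, scoring="r2")
    print(f"time step {t}: decoding R^2 = {scores.mean():.3f}")
```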