Affine feature response transforms

24 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference · Desk Rejected Submission
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: neural encoding, receptive fields, multi-unit activity, visual cortex
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: We introduce affine feature response transforms (AFRT, pronounced "é-fert"), a new family of neural encoding models based on spatial transformer networks (STNs). AFRT factorises a receptive field into a sequential affine component with three interpretable parameters (Θ, for shifting and scaling) and a response component with a small number of feature weights per response. This factorisation drastically simplifies state-of-the-art neural encoding models and significantly improves encoding performance. We additionally investigate the sizes of the AFRT-estimated receptive fields at various depths of the network, and find that these sizes correspond to the information complexity of the matching brain regions, offering insight into the model's behaviour. Together, these results demonstrate the benefit of incorporating spatial specificity into neural encoding models and potentially offer a new avenue for retinotopic mapping.
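The factorisation described in the abstract can be sketched in a few lines: an affine component with three interpretable parameters (one scale, two shifts) selects a window of a feature map via bilinear sampling, and a response component applies a small vector of feature weights to the cropped features. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation; the function names (`affine_grid`, `bilinear_sample`, `afrt_response`) and the pooling choice (spatial mean before the feature weights) are hypothetical.

```python
import numpy as np

def affine_grid(theta, H, W):
    # theta = (s, tx, ty): isotropic scale plus x/y shift -- a 3-parameter
    # affine transform in the spirit of AFRT (parameterisation assumed).
    s, tx, ty = theta
    ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                         np.linspace(-1, 1, W), indexing="ij")
    # Map output coordinates into input coordinates: a scaled window
    # centred at (tx, ty) in normalised [-1, 1] space.
    return s * xs + tx, s * ys + ty

def bilinear_sample(fmap, gx, gy):
    # fmap: (C, H, W) feature map; gx, gy: normalised sampling coordinates.
    C, H, W = fmap.shape
    x = (gx + 1) * (W - 1) / 2
    y = (gy + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * fmap[:, y0, x0] + wx * fmap[:, y0, x0 + 1]
    bot = (1 - wx) * fmap[:, y0 + 1, x0] + wx * fmap[:, y0 + 1, x0 + 1]
    return (1 - wy) * top + wy * bot  # (C, H, W) affine crop

def afrt_response(fmap, theta, w):
    # Response component: spatially pool the affine crop, then apply a
    # small vector of per-channel feature weights.
    gx, gy = affine_grid(theta, *fmap.shape[1:])
    crop = bilinear_sample(fmap, gx, gy)
    return float(np.dot(w, crop.mean(axis=(1, 2))))
```

With the identity transform `theta = (1.0, 0.0, 0.0)` the crop reproduces the feature map, and shrinking `s` zooms the receptive field into a smaller region, which is what makes the three parameters directly interpretable as retinotopic position and size.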
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9295