Relating transformers to models and neural representations of the hippocampal formation

29 Sept 2021, 00:34 (edited 09 May 2022) · ICLR 2022 Poster
  • Keywords: Neuroscience, representation learning, hippocampus, cortex, transformers
  • Abstract: Many deep neural network architectures loosely based on brain networks have recently been shown to replicate neural firing patterns observed in the brain. One of the most exciting and promising novel architectures, the Transformer neural network, was developed without the brain in mind. In this work, we show that transformers, when equipped with recurrent position encodings, replicate the precisely tuned spatial representations of the hippocampal formation, most notably place and grid cells. Furthermore, we show that this result is unsurprising, as the transformer is closely related to current hippocampal models from neuroscience. We additionally show that the transformer version offers dramatic performance gains over the neuroscience version. This work further ties the computations of artificial and brain networks, offers a novel understanding of hippocampal-cortical interaction, and suggests how wider cortical areas may perform complex tasks, such as language comprehension, beyond current neuroscience models.
  • One-sentence Summary: Transformers learn brain representations and are algorithmically related to models of the hippocampal formation.
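The abstract's key architectural ingredient, a transformer whose position encodings are produced by a recurrence rather than fixed sinusoids, can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the recurrent update `p[t] = tanh(W_r @ p[t-1])`, the random weight matrices, and the single-head attention are all stand-in assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d_model, seq_len = 16, 8

# Hypothetical recurrent position encoding: instead of fixed sinusoids,
# each position vector is generated from the previous one by an RNN step.
W_r = rng.normal(scale=1.0 / np.sqrt(d_model), size=(d_model, d_model))
p = np.zeros((seq_len, d_model))
p[0] = rng.normal(size=d_model)
for t in range(1, seq_len):
    p[t] = np.tanh(W_r @ p[t - 1])

# Token embeddings (random stand-ins) combined with recurrent positions
x = rng.normal(size=(seq_len, d_model)) + p

# Single-head self-attention with illustrative random weights
W_q, W_k, W_v = (
    rng.normal(scale=1.0 / np.sqrt(d_model), size=(d_model, d_model))
    for _ in range(3)
)
q, k, v = x @ W_q, x @ W_k, x @ W_v
attn = softmax(q @ k.T / np.sqrt(d_model))  # (seq_len, seq_len) weights
out = attn @ v                              # attended output, (seq_len, d_model)

print(out.shape)
```

The point of the sketch is only that the positional input is path-dependent: swapping the sinusoidal encoding for a recurrently generated one is what lets position representations be learned and, per the paper, is what connects the architecture to hippocampal models.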