Lie-Access Neural Turing Machines

Published: 21 Jul 2022 · Last Modified: 22 Oct 2023 · ICLR 2017 Poster
Abstract: External neural memory structures have recently become a popular tool for algorithmic deep learning (Graves et al., 2014; Weston et al., 2014). These models generally use differentiable versions of traditional discrete memory-access structures (random access, stacks, tapes) to provide the storage necessary for computational tasks. In this work, we argue that these neural memory systems lack specific structure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm, memory is accessed using a continuous head in a key-space manifold. The head is moved via Lie group actions, such as shifts or rotations, generated by a controller, and memory access is performed by linear smoothing in key space. We argue that Lie groups provide a natural generalization of discrete memory structures, such as Turing machines, as they provide inverse and identity operators while maintaining differentiability. To experiment with this approach, we implement a simplified Lie-access neural Turing machine (LANTM) with different Lie groups. We find that this approach performs well on a range of algorithmic tasks.
TL;DR: We generalize Turing machines to the continuous setting using Lie group actions on manifolds.
Conflicts: harvard.edu
Keywords: Natural language processing, Deep learning, Supervised Learning
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:1611.02854/code)
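
To make the access pattern concrete, here is a minimal sketch, not taken from the paper's codebase, of a Lie-access read. It assumes the simplest setting: keys live in the plane R^2, the head is moved by the translation group (a Lie group action with an identity and inverses, as the abstract notes), and the "linear smoothing in key space" is realized as normalized inverse-squared-distance weighting; that particular weighting scheme, along with all names and shapes below, is an illustrative assumption.

```python
# Sketch of Lie-access memory read (illustrative assumptions, not the
# authors' implementation): translation group on R^2, inverse-squared-
# distance smoothing as one possible linear smoothing in key space.
import numpy as np

def lie_access_read(keys, values, head, eps=1e-6):
    """Read from memory by linear smoothing in key space.

    keys:   (n, d) key-space positions of stored memories
    values: (n, m) stored memory vectors
    head:   (d,)   current head position on the key manifold
    """
    # Weight each memory by inverse squared distance to the head,
    # then normalize so the weights form a convex combination.
    d2 = np.sum((keys - head) ** 2, axis=1) + eps
    w = 1.0 / d2
    w /= w.sum()
    return w @ values  # weighted average of memory vectors

# Moving the head is a group action. For the translation group on R^2
# the action is vector addition; every shift has an inverse (-shift)
# and the zero shift is the identity, mirroring a Turing machine's
# left/right/stay head moves in a differentiable form.
head = np.zeros(2)
shift = np.array([1.0, 0.0])   # in the full model, emitted by a controller
head = head + shift            # apply the group action

keys = np.random.randn(8, 2)
values = np.random.randn(8, 16)
r = lie_access_read(keys, values, head)
```

Because the smoothing weights and the group action are both differentiable in the head position, gradients flow through the read back into the controller that produced the shift, which is what distinguishes this scheme from discrete tape addressing.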