Lie-Access Neural Turing Machines
Greg Yang, Alexander Rush
Nov 04, 2016 (modified: Mar 03, 2017) · ICLR 2017 conference submission · readers: everyone
External neural memory structures have recently become a popular tool for
algorithmic deep learning
(Graves et al. 2014; Weston et al. 2014). These models
generally utilize differentiable versions of traditional discrete
memory-access structures (random access, stacks, tapes) to provide
the storage necessary for computational tasks. In
this work, we argue that these neural memory systems lack specific
structure important for relative indexing, and propose an
alternative model, Lie-access memory, that is explicitly designed
for the neural setting. In this paradigm, memory is accessed using
a continuous head in a key-space manifold. The head is moved via Lie
group actions, such as shifts or rotations, generated by a
controller, and memory access is performed by linear smoothing in
key space. We argue that Lie groups provide a natural generalization
of discrete memory structures, such as Turing machines, as they
provide inverse and identity operators while maintaining
differentiability. To experiment with this approach, we implement
a simplified Lie-access neural Turing machine (LANTM) with
different Lie groups. We find that this approach is able to perform
well on a range of algorithmic tasks.
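The access mechanism described above can be sketched concretely. The following is a hypothetical minimal illustration (function names and the inverse-square-distance smoothing kernel are illustrative assumptions, not the paper's exact implementation): memory keys live on a manifold (here the plane, with the shift group R^2 acting by vector addition), the controller emits a group element that moves the head, and a read is a linear smoothing over stored values weighted by proximity in key space.

```python
import numpy as np

def lie_access_read(keys, values, head, eps=1e-6):
    """Read from memory by inverse-square-distance smoothing in key space.

    keys:   (N, d) key positions on the manifold (here, R^d)
    values: (N, m) stored memory vectors
    head:   (d,)   current head position in key space
    """
    d2 = np.sum((keys - head) ** 2, axis=1)  # squared distances to each key
    w = 1.0 / (d2 + eps)                     # unnormalized smoothing weights
    w = w / w.sum()                          # normalize to a distribution
    return w @ values                        # linearly smoothed read

# Illustrative memory: three slots with one-hot values.
keys = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
values = np.eye(3)
head = np.array([0.0, 0.0])

# The controller moves the head via a Lie group action; for the shift
# group this is vector addition, which has an identity (the zero shift)
# and an inverse (the negated shift) while remaining differentiable.
shift = np.array([1.0, 0.0])
head = head + shift
r = lie_access_read(keys, values, head)
# The head now sits on the second key, so the read concentrates there.
```

This differentiable analogue of a tape head makes the abstract's point concrete: relative moves compose, undo, and cancel exactly because they are group elements, unlike soft attention over unstructured slots.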
TL;DR: We generalize Turing machines to the continuous setting using Lie group actions on manifolds.
Keywords: Natural language processing, Deep learning, Supervised Learning