Training Neural Machines with Partial Traces

15 Feb 2018 (modified: 15 Feb 2018) · ICLR 2018 Conference Blind Submission
Abstract: We present a novel approach for training neural abstract architectures that incorporates (partial) supervision over the machine's interpretable components. To cleanly capture the set of neural architectures to which our method applies, we introduce the concept of a differentiable neural computational machine (∂NCM) and show that several existing architectures (e.g., NTMs, NRAMs) can be instantiated as ∂NCMs and can thus benefit from any amount of additional supervision over their interpretable components. Based on our method, we performed a detailed experimental evaluation with both the NTM and NRAM architectures, showing that the approach leads to significantly better convergence and generalization of the learning phase than training with input-output examples alone.
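To make the core idea concrete, the following is a minimal sketch (not code from the paper) of how a partial-trace term could be added to the usual input-output objective. All names here (combined_loss, lambda_trace, the attention-style trace tensors) are illustrative assumptions, and the trace term is a simple masked cross-entropy against a head distribution; the paper's actual loss formulation may differ.

import torch
import torch.nn.functional as F

def combined_loss(output_logits, target_outputs,
                  pred_trace, partial_trace, trace_mask,
                  lambda_trace=0.5):
    # Standard input-output loss on the machine's final outputs.
    output_loss = F.cross_entropy(output_logits, target_outputs)

    # Trace term: cross-entropy between the machine's interpretable
    # component (e.g., a head/attention distribution over memory) and
    # the supervised trace, applied only at steps where a trace exists.
    per_step = -(partial_trace * torch.log(pred_trace + 1e-8)).sum(dim=-1)  # shape [T]
    trace_loss = (per_step * trace_mask).sum() / trace_mask.sum().clamp(min=1.0)

    return output_loss + lambda_trace * trace_loss

# Example: supervise roughly 30% of the T time steps.
T, N = 10, 8                                          # time steps, memory cells
output_logits = torch.randn(4, 5)                     # 4 output positions, 5 classes
target_outputs = torch.randint(0, 5, (4,))
pred_trace = torch.softmax(torch.randn(T, N), -1)     # machine's head distribution
partial_trace = torch.softmax(torch.randn(T, N), -1)  # stand-in for the given trace
trace_mask = (torch.rand(T) < 0.3).float()            # 1 where the trace is given
loss = combined_loss(output_logits, target_outputs, pred_trace, partial_trace, trace_mask)

Because the trace term is masked per time step, any amount of supervision, from a single step to a full execution trace, plugs into the same objective.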
TL;DR: We increase the amount of trace supervision that can be utilized when training fully differentiable neural machine architectures.
Keywords: Neural Abstract Machines, Neural Turing Machines, Neural Random Access Machines, Program Synthesis, Program Induction
