Twin Networks: Matching the Future for Sequence Generation
Nov 07, 2017 (modified: Nov 07, 2017). ICLR 2018 Conference Blind Submission
Abstract: We propose a simple technique for encouraging generative RNNs to plan ahead.
We train a "backward" recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model.
The backward network is used only during training, and plays no role during sampling or inference.
We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states).
We show empirically that our approach yields a 9% relative improvement on a speech recognition task and a 0.8-point CIDEr improvement on a COCO caption generation task.
TL;DR: The paper introduces a method for training generative recurrent networks that helps them plan ahead. We run a second RNN in the reverse direction and impose a soft constraint between cotemporal forward and backward states.
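To make the soft constraint concrete, here is a minimal NumPy sketch of the idea: run a forward RNN over the sequence, a backward RNN over the reversed sequence, re-align the backward states in time, and penalize the L2 distance between cotemporal state pairs. All names, dimensions, and the plain tanh RNN cell are illustrative assumptions; the actual model uses trained recurrent generators and may match the states through a learned affine map rather than directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_states(xs, W, U, h0):
    # Plain tanh RNN (illustrative stand-in for the paper's recurrent cells):
    # h_t = tanh(W h_{t-1} + U x_t); returns the state after each input.
    hs, h = [], h0
    for x in xs:
        h = np.tanh(W @ h + U @ x)
        hs.append(h)
    return hs

T, d_in, d_h = 5, 3, 4
xs = [rng.normal(size=d_in) for _ in range(T)]  # toy input sequence

# Forward network reads x_1..x_T; backward network reads x_T..x_1.
Wf, Uf = rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_in))
Wb, Ub = rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_in))
h_f = rnn_states(xs, Wf, Uf, np.zeros(d_h))
h_b = rnn_states(xs[::-1], Wb, Ub, np.zeros(d_h))[::-1]  # re-align in time

# Twin-network penalty: encourage the forward state at each step to match
# the cotemporal backward state (which summarizes the future of the sequence).
twin_loss = sum(np.sum((hf - hb) ** 2) for hf, hb in zip(h_f, h_b)) / T
```

During training this penalty would be added to the usual generative loss and minimized jointly; at sampling time the backward network (and the penalty) is discarded, so inference cost is unchanged.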
Keywords: generative RNNs, long-term dependencies, speech recognition, image captioning