How to Construct Deep Recurrent Neural Networks

ICLR 2014 · 29 Nov 2024 (modified: 23 Dec 2013)
Decision: submitted, no decision
Abstract: In this paper, we propose a novel way to extend a recurrent neural network (RNN) to a deep RNN. We start by arguing that the concept of depth in an RNN is not as clear-cut as it is in feedforward neural networks. By carefully analyzing the architecture of an RNN, we identify three points at which it may be made deeper: (1) the input-to-hidden function, (2) the hidden-to-hidden transition and (3) the hidden-to-output function. These are in addition to stacking multiple recurrent layers, proposed earlier by Schmidhuber (1992). Based on this observation, we propose two novel deep RNN architectures and give an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are evaluated empirically on the tasks of polyphonic music prediction and language modeling. The experimental results support our claim that the proposed deep RNNs benefit from depth and outperform a conventional, shallow RNN.
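
Concretely, "deepening the hidden-to-hidden transition" can be pictured as replacing the usual single affine-plus-nonlinearity update with a small multi-layer network between consecutive hidden states. The NumPy listing below is a minimal sketch of that idea under assumed choices (two tanh layers in the transition, Gaussian initialization, a linear readout); it is not the exact parameterization used in the paper, and all names and sizes are illustrative.

    import numpy as np

    def tanh_layer(W, b, x):
        # affine map followed by an elementwise tanh
        return np.tanh(W @ x + b)

    class DeepTransitionRNN:
        """Sketch of an RNN whose hidden-to-hidden transition is itself a
        two-layer network -- one of the three depth axes named in the
        abstract. Hypothetical parameterization, for illustration only."""

        def __init__(self, n_in, n_hid, n_out, seed=0):
            rng, s = np.random.default_rng(seed), 0.1
            # input-to-hidden function (kept shallow here)
            self.W_xh = rng.normal(0, s, (n_hid, n_in))
            # deep hidden-to-hidden transition: two stacked tanh layers
            self.W_hh1 = rng.normal(0, s, (n_hid, n_hid))
            self.b_h1 = np.zeros(n_hid)
            self.W_hh2 = rng.normal(0, s, (n_hid, n_hid))
            self.b_h2 = np.zeros(n_hid)
            # hidden-to-output function (a linear readout)
            self.W_hy = rng.normal(0, s, (n_out, n_hid))
            self.b_y = np.zeros(n_out)

        def step(self, h, x):
            # intermediate layer of the deep transition
            z = tanh_layer(self.W_hh1, self.b_h1, h)
            # new hidden state: second transition layer plus the input term
            return np.tanh(self.W_hh2 @ z + self.b_h2 + self.W_xh @ x)

        def forward(self, xs):
            h = np.zeros(self.W_hh1.shape[0])
            ys = []
            for x in xs:
                h = self.step(h, x)
                ys.append(self.W_hy @ h + self.b_y)
            return np.stack(ys)

    # usage: a sequence of 5 random 3-dimensional inputs
    rnn = DeepTransitionRNN(n_in=3, n_hid=8, n_out=2)
    xs = np.random.default_rng(1).normal(size=(5, 3))
    print(rnn.forward(xs).shape)  # (5, 2)

A conventional shallow RNN is recovered by collapsing the two transition layers into one; deepening the input-to-hidden or hidden-to-output functions would analogously replace W_xh or W_hy with small multi-layer networks.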