Abstract: Training a neural network (NN) through reinforcement learning (RL) has recently attracted growing interest, and recurrent NNs (RNNs) are used for learning tasks that require memory. Meanwhile, to overcome the difficulties of training an RNN, the reservoir network (RN) has often been employed, mainly in supervised learning. The RN is a special type of RNN that has attracted much attention owing to its rich dynamic representations. An approach using a multi-layer readout (MLR), which comprises a multi-layer NN, has been studied for acquiring complex representations with the RN. This study demonstrates that an RN with an MLR can learn a “memory task” through RL with backpropagation. Moreover, the non-linear representations required to solve the task are not observed in the RN itself but are constructed by learning in the MLR. These results suggest that the MLR can compensate for the limited computational ability of an RN.
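To make the architecture concrete, the following is a minimal sketch of a reservoir network with a multi-layer readout: the reservoir (input and recurrent weights) is fixed and random, while only the small MLP on top of the reservoir state would be trained. All sizes, initializations, and the spectral-radius value are illustrative assumptions, not the paper's actual settings, and no RL training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
N_IN, N_RES, N_HID, N_OUT = 2, 100, 32, 1

# Fixed random reservoir (echo state network style): these weights are
# generated once and never trained.
W_in = rng.uniform(-0.1, 0.1, (N_RES, N_IN))
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius 0.9

# Trainable multi-layer readout (a small MLP reading the reservoir state);
# in the study, only these weights would be updated by RL with backpropagation.
W1 = rng.normal(0.0, 0.1, (N_HID, N_RES))
W2 = rng.normal(0.0, 0.1, (N_OUT, N_HID))

def step(x, u):
    """One reservoir update followed by the multi-layer readout."""
    x = np.tanh(W_res @ x + W_in @ u)  # reservoir dynamics (fixed)
    h = np.tanh(W1 @ x)                # hidden layer of the readout (trainable)
    y = W2 @ h                         # output of the readout (trainable)
    return x, y

# Drive the network with a short random input sequence.
x = np.zeros(N_RES)
for t in range(10):
    x, y = step(x, rng.uniform(-1.0, 1.0, N_IN))
```

The key design point is that the reservoir only supplies rich dynamics; any non-linear representation needed for the task must be built by the trainable readout layers.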