Generalization of Learning using Reservoir Computing


Nov 07, 2017 (modified: Nov 07, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: We investigate how a Reservoir Computing Network (RCN), trained to classify pairs of images as 'similar', 'transformed' or 'different', learns the relationships between the images and generalizes these concepts to previously unseen types of data. One of our motivations is to explore how biologically realistic features of RCNs, such as recurrent connections between neuron-like units and resemblance to neural dynamics, contribute to the learning capabilities of Artificial Neural Networks (ANNs). Specifically, we show that an RCN trained to identify strong similarities or transformations between image pairs drawn from a subset of digits from the MNIST database generalizes the learned transformations to images of digits unseen during training. We observe that the high-dimensional reservoir states generated from an input image pair with a specific transformation converge over time to a unique relationship. Thus, rather than learning from the entire high-dimensional reservoir state, the network only needs to learn these unique relationships, allowing it to perform well with very few training examples. The directions of the principal components of the time-projected reservoir states representing input image pairs with the same relationship are aligned more closely than those representing image pairs with different relationships. Thus, generalization of learning to unseen images is interpretable in terms of clustering of the reservoir state onto the attractor corresponding to the transformation in reservoir space. We find that RCNs can identify and generalize linear and non-linear transformations, and combinations of transformations, naturally, and can serve as robust and effective image classifiers.
Additionally, RCNs perform significantly better than state-of-the-art neural network classification techniques, such as deep siamese neural networks, on generalization tasks, both on the MNIST dataset and on more complex depth maps of visual scenes from a moving camera. This helps bridge the gap between machine learning and the biological problem of human cognition, and points to new directions in the investigation of learning processes.
  • TL;DR: Generalization of the relationships learned between pairs of images, using a small training set, to previously unseen types of images, with a biologically plausible model, Reservoir Computing.
  • Keywords: Generalization, Reservoir Computing, dynamical system, Siamese Neural Network, image classification, similarity, dimensionality reduction