Learning Paired-associate Images with An Unsupervised Deep Learning Architecture
Ti Wang, Daniel L. Silver
Dec 25, 2013 (modified: Dec 25, 2013) · ICLR 2014 conference submission · Readers: everyone
Decision: submitted, no decision
Abstract: This paper presents an unsupervised multi-modal learning system that learns associative representations from two input modalities (channels), such that input on one channel will correctly generate the associated response on the other channel, and vice versa. In this way, the system develops a kind of supervised classification model meant to simulate aspects of human associative memory. The system uses a deep learning architecture (DLA) composed of two input/output channels formed from stacked Restricted Boltzmann Machines (RBMs) and an associative memory network that combines the two channels. The DLA is trained on pairs of MNIST handwritten digit images to develop hierarchical features and associative representations that are able to reconstruct one image given its paired associate. Experiments show that the multi-modal learning system generates models that are as accurate as back-propagation networks, but with the advantage of unsupervised learning from either paired or non-paired training examples.
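The submission itself contains no code. The following is a minimal sketch, in NumPy, of the kind of architecture the abstract describes: two channels of RBMs joined by an associative RBM, with cross-channel recall performed by clamping one channel's code and Gibbs-sampling the other. All layer sizes, the toy paired data (bit patterns rather than MNIST images), the single-RBM-per-channel depth, and the CD-1 training details are assumptions for illustration, not the authors' actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.bv = np.zeros(n_vis)   # visible biases
        self.bh = np.zeros(n_hid)   # hidden biases
        self.lr = lr

    def hid(self, v):
        return sigmoid(v @ self.W + self.bh)

    def vis(self, h):
        return sigmoid(h @ self.W.T + self.bv)

    def cd1(self, v0):
        # Positive phase, one sampled reconstruction, negative phase.
        h0 = self.hid(v0)
        h0s = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.vis(h0s)
        h1 = self.hid(v1)
        n = len(v0)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.bv += self.lr * (v0 - v1).mean(axis=0)
        self.bh += self.lr * (h0 - h1).mean(axis=0)

# Toy paired "images": channel B is channel A reversed (a stand-in for
# the paper's paired MNIST digits).
n_pix, n_hid, n_top = 16, 12, 10
A = (rng.random((200, n_pix)) < 0.5).astype(float)
B = A[:, ::-1].copy()

chan_a, chan_b = RBM(n_pix, n_hid), RBM(n_pix, n_hid)
assoc = RBM(2 * n_hid, n_top)   # associative layer joining the two channels

for _ in range(50):             # greedy layer-wise training
    chan_a.cd1(A)
    chan_b.cd1(B)
ha, hb = chan_a.hid(A), chan_b.hid(B)
for _ in range(50):
    assoc.cd1(np.concatenate([ha, hb], axis=1))

def recall_b_given_a(img_a, gibbs_steps=20):
    """Clamp channel A's code, Gibbs-sample the joint layer, decode channel B."""
    ha = chan_a.hid(img_a)
    hb = np.full_like(ha, 0.5)  # unknown channel starts at maximum uncertainty
    for _ in range(gibbs_steps):
        top = assoc.hid(np.concatenate([ha, hb], axis=1))
        hb = assoc.vis(top)[:, n_hid:]   # update only B's half; A stays clamped
    return chan_b.vis(hb)

recalled = recall_b_given_a(A[:5])       # (5, 16) array of pixel probabilities
```

On this toy problem the recall is a vector of pixel probabilities for channel B; in the paper's setting the analogous output would be the reconstructed paired MNIST image, and recall in the other direction works symmetrically by clamping channel B instead.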