Learning to encode motion using spatio-temporal synchrony
Kishore Reddy Konda, Roland Memisevic, Vincent Michalski
Dec 18, 2013 (modified: Dec 18, 2013) · ICLR 2014 conference submission · Readers: everyone
Decision: submitted, no decision
Abstract: We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative 'gating' interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
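The synchrony idea from the abstract can be sketched with a toy NumPy example. This is an illustrative sketch, not the authors' model: the random filter bank, the candidate shift set, and the mean-pooling are assumptions made here for demonstration (the paper learns its features rather than using random ones, and works on real video). The point it shows is the multiplicative gating: the product of a filter's response on frame 1 with a shifted filter's response on frame 2 is consistently large only when the candidate shift matches the true motion.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix, n_filters = 32, 256  # toy sizes, chosen arbitrarily

# A random 1-D "frame" and a circularly shifted copy: the motion to detect.
true_shift = 3
x = rng.standard_normal(n_pix)
y = np.roll(x, true_shift)

# A bank of random filters; their shifted versions play the role of the
# "sequence of features undergoing the motion we wish to detect".
W = rng.standard_normal((n_filters, n_pix))

# Multiplicative gating across frames: for each candidate shift s, multiply
# each filter's response on frame 1 with the shifted filter's response on
# frame 2, then pool over the filter bank. The products are synchronous
# (consistently positive) only at the true shift, where each term becomes
# a squared response.
scores = np.array([np.mean((W @ x) * (np.roll(W, s, axis=1) @ y))
                   for s in range(n_pix)])

detected = int(np.argmax(scores))
print(detected)  # should recover true_shift
```

Pooling squared products of filter responses in this way is also what ties the construction back to the classical motion energy model mentioned at the end of the abstract.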