Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences

Sainbayar Sukhbaatar, Takaki Makino, Kazuyuki Aihara

Jan 16, 2013 · ICLR 2013 conference submission
  • Decision: conferencePoster-iclr2013-workshop
  • Abstract: Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that learns a soft clustering of features from image sequences. It is trained to improve the temporal coherence of features while keeping the information loss to a minimum. Because our method does not use spatial information, it can also be applied to non-convolutional models. Experiments on images extracted from natural videos show that our method clusters similar features together. When trained on convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of features.
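The abstract describes two competing objectives: pooled responses should vary slowly across consecutive frames (temporal coherence), while the pooling should discard as little information as possible. The paper's exact formulation is not given here, so the following is only an illustrative sketch under assumed choices: a nonnegative soft-clustering matrix `W`, a squared temporal-difference penalty on pooled responses, and a linear reconstruction `W.T @ W @ f` as the information-loss proxy; all names, dimensions, and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): T feature vectors of dim d
# extracted from consecutive video frames, pooled into k units by a
# learned nonnegative soft-clustering matrix W of shape (k, d).
T, d, k = 50, 20, 5
feats = np.cumsum(rng.normal(scale=0.1, size=(T, d)), axis=0)  # slowly drifting features

W = np.abs(rng.normal(size=(k, d))) * 0.1  # assumed nonnegative pooling weights
lr, lam = 0.01, 1.0                        # step size, reconstruction weight (assumed)
loss_history = []

for step in range(500):
    pooled = feats @ W.T             # (T, k) pooled responses
    err = pooled @ W - feats         # reconstruction error: W^T W f - f
    dfeat = feats[1:] - feats[:-1]   # frame-to-frame feature changes
    dpool = dfeat @ W.T              # change of pooled responses between frames

    coh = 0.5 * (dpool ** 2).sum() / (T - 1)  # temporal-coherence penalty
    rec = 0.5 * (err ** 2).sum() / T          # information-loss (reconstruction) penalty
    loss_history.append(coh + lam * rec)

    # Gradients of the two penalties with respect to W
    grad = (dpool.T @ dfeat) / (T - 1) \
         + lam * (pooled.T @ err + (err @ W.T).T @ feats) / T
    W = np.maximum(W - lr * grad, 0.0)        # gradient step, project weights to nonnegative
```

After training, rows of `W` act as soft clusters: features whose responses co-vary over time receive large weights in the same pooled unit, which is one way to realize the clustering behavior the abstract reports.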