Transfer Learning on Manifolds via Learned Transport Operators

Anonymous

Nov 07, 2017 (modified: Nov 07, 2017) · ICLR 2018 Conference Blind Submission
  • Abstract: Within-class variation in a high-dimensional dataset can be modeled as lying on a low-dimensional manifold due to the constraints of the physical processes producing that variation (e.g., translation, illumination, etc.). We desire a method for learning a representation of these manifolds that can be used to increase robustness and reduce the training burden in machine learning tasks. In particular, what is needed is a representation of the transformation manifold that can robustly capture the shape of the manifold from the input data, generate new points on the manifold, and extend transformations outside of the training domain without significantly increasing the error. Previous work has proposed algorithms to efficiently learn analytic operators (called \emph{transport operators}) that define the process of transporting one data point on a manifold to another. The main contribution of this paper is to show that the manifold representation learned in the transport operators is valuable both as a probabilistic model that improves general machine learning tasks and as a tool for transfer learning in classification tasks. Specifically, we demonstrate that manifold transport operators can learn a robust representation of manifold structure. We then use this representation in a novel randomized approach to transfer learning that employs the learned generative model to map out unseen regions of the data space. These results are shown on stylized constructions using the classic Swiss roll dataset for ease of visualization, as well as in demonstrations of transfer learning in a data augmentation task for few-shot image classification.
  • TL;DR: Learning transport operators on manifolds forms a valuable representation for doing tasks like transfer learning.
  • Keywords: manifold learning, transfer learning
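The core mechanism the abstract describes, transporting one manifold point to another via a learned analytic operator, is commonly formulated as applying a matrix exponential of a linear generator to a data point. The sketch below illustrates this idea only; the operator `Psi` is a fixed, hand-picked rotation generator rather than one learned from data as in the paper, and `mat_expm` is a simple helper standing in for a library matrix exponential (e.g., `scipy.linalg.expm`):

```python
import numpy as np

def mat_expm(A, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for small matrices)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Hypothetical single transport operator: the generator of 2-D rotations.
# In the transport-operator framework a whole dictionary of such generators
# is learned from data; here one known generator is fixed for illustration.
Psi = np.array([[0.0, -1.0],
                [1.0,  0.0]])

x0 = np.array([1.0, 0.0])  # a starting point on the unit circle (a 1-D manifold)

# Transport along the manifold: x(c) = expm(c * Psi) @ x0.
# Sweeping the coefficient c generates new points on the manifold,
# including extrapolation beyond any "training" range of c.
for c in [0.0, 0.5, 1.0, 2.0]:
    x = mat_expm(c * Psi) @ x0
    print(f"c = {c:.1f}  ->  x = {x}")
```

Because the generated points stay on the manifold (here, the unit circle) for any coefficient value, the same mechanism can serve as a generative model for data augmentation, which is the role it plays in the transfer learning experiments the abstract mentions.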
