LEAP: Learning Embeddings for Adaptive Pace

Anonymous

Nov 07, 2017 (modified: Nov 07, 2017) · ICLR 2018 Conference Blind Submission
  • Abstract: The parameterization of mini-batches for training Deep Neural Networks (DNNs) is a non-trivial problem. In this paper, we propose a Self-Paced Learning (SPL)-fused Deep Metric Learning (DML) framework, which we call Learning Embeddings for Adaptive Pace (LEAP). Our method parameterizes mini-batches dynamically based on the easiness and true diverseness of samples within a salient feature representation space. In LEAP, we train an embedding Convolutional Neural Network (CNN) to learn an expressive representation space by adaptive density discrimination using the Magnet Loss. The student CNN classifier dynamically selects samples to form a mini-batch based on easiness, measured by per-sample cross-entropy loss, and true diverseness, measured in the representation space sculpted by the embedding CNN (a minimal sketch of this selection step appears after this list). We evaluate LEAP using deep CNN architectures on the task of supervised image classification on MNIST, Fashion-MNIST, and CIFAR-10. We show that LEAP converges in fewer mini-batch updates while achieving comparable or better test performance on each dataset. Our framework is implemented in PyTorch and will be released as open-source on GitHub following review.
  • TL;DR: LEAP combines the strengths of adaptive sampling, mini-batch online learning, and adaptive representation learning to form a representative self-paced strategy within an end-to-end DNN training protocol.
  • Keywords: deep metric learning, self-paced learning, representation learning, cnn
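
As a point of reference for the selection step described in the abstract, here is a minimal, hypothetical sketch in PyTorch (the framework the abstract names). The function `select_minibatch`, its arguments, and the use of precomputed cluster ids (e.g. from k-means over the embedding CNN's features) as a stand-in for the Magnet Loss cluster structure are all illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F


def select_minibatch(logits, labels, cluster_ids, batch_size):
    """Return indices of an easiness- and diversity-aware mini-batch.

    Illustrative sketch: easiness is read off per-sample cross-entropy
    (lower = easier); diversity is approximated by drawing round-robin
    from clusters of the embedding space.
    """
    batch_size = min(batch_size, len(labels))
    # Per-sample loss: lower cross-entropy = "easier" example.
    losses = F.cross_entropy(logits, labels, reduction="none")
    easiest_first = losses.argsort().tolist()
    # Queue each cluster's samples from easiest to hardest.
    queues = {
        int(c): [i for i in easiest_first if int(cluster_ids[i]) == int(c)]
        for c in cluster_ids.unique()
    }
    chosen = []
    # Round-robin over clusters keeps the batch diverse.
    while len(chosen) < batch_size:
        for q in queues.values():
            if q:
                chosen.append(q.pop(0))
            if len(chosen) == batch_size:
                break
    return torch.tensor(chosen)


# Hypothetical usage with a candidate pool scored by the student CNN:
# idx = select_minibatch(student(x_pool), y_pool, kmeans_ids, batch_size=64)
# x_batch, y_batch = x_pool[idx], y_pool[idx]
```

Round-robin sampling across clusters is just one way to realize "true diverseness"; the paper's actual criterion may weight or threshold clusters differently.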
