Distributed Transfer Learning for Deep Convolutional Neural Networks by Basic Probability Assignment
Nov 04, 2016 (modified: Dec 16, 2016) · ICLR 2017 conference submission · Readers: everyone
Abstract: Transfer learning is a popular practice in deep neural networks, but fine-tuning a large number of parameters is challenging due to the complex wiring of neurons between splitting layers and the imbalanced class distributions of the original and transferred domains. Recent advances in evidence theory show that, in an imbalanced multiclass learning problem, optimizing proper objective functions based on contingency tables prevents bias towards high-prior classes. Transfer learning usually deals with highly non-convex objectives and local minima in deep neural architectures. We propose a novel distributed transfer learning approach to tackle both the optimization complexity and the class-imbalance problem jointly. Our solution imposes separate greedy regularization on each individual convolutional filter to build single-filter neural networks such that the minority classes perform as well as the majority ones. Then, basic probability assignment from evidence theory boosts these distributed networks to improve recognition performance on the target domains. Our experiments on several standard datasets confirm consistent improvement as a result of our distributed transfer learning strategy.
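The abstract does not give the exact formulation, but the fusion step it describes can be illustrated with a minimal sketch: each single-filter classifier receives per-class masses (a basic probability assignment) estimated from its validation contingency table, and the ensemble's prediction weights each classifier's class probabilities by those masses. The function names and the specific weighting (averaging recall and precision per class) are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of BPA-weighted fusion of distributed classifiers.
# bpa_from_confusion and fuse are assumed names; the mass definition
# (mean of per-class recall and precision) is an illustrative choice.

def bpa_from_confusion(confusion):
    """Derive per-class masses from one classifier's contingency table.

    confusion[i][j] = number of samples of true class i predicted as j.
    Classes on which the classifier is reliable (high recall and
    precision) receive larger mass, so minority classes a given filter
    handles well are not drowned out by high-prior classes.
    """
    k = len(confusion)
    masses = []
    for c in range(k):
        tp = confusion[c][c]
        row = sum(confusion[c])                        # true class c count
        col = sum(confusion[i][c] for i in range(k))   # predicted c count
        recall = tp / row if row else 0.0
        precision = tp / col if col else 0.0
        masses.append(0.5 * (recall + precision))
    total = sum(masses)
    return [m / total if total else 1.0 / k for m in masses]

def fuse(prob_lists, bpas):
    """Combine class probabilities from several single-filter networks,
    weighting each network's vote on class c by its mass for class c."""
    k = len(prob_lists[0])
    fused = [0.0] * k
    for probs, bpa in zip(prob_lists, bpas):
        for c in range(k):
            fused[c] += bpa[c] * probs[c]
    s = sum(fused)
    return [f / s for f in fused] if s else [1.0 / k] * k

# Two toy single-filter classifiers on a 2-class problem.
bpa_a = bpa_from_confusion([[8, 2], [1, 9]])
bpa_b = bpa_from_confusion([[5, 5], [2, 8]])
fused = fuse([[0.6, 0.4], [0.3, 0.7]], [bpa_a, bpa_b])
```

The renormalization at the end keeps the fused output a valid probability distribution regardless of how the masses are scaled.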
Keywords: Deep Learning, Transfer Learning, Supervised Learning, Optimization
Conflicts: anu.edu.au, data61.csiro.au, csiro.au