Abstract: A central concern of the deep learning community is increasing the
representational capacity of deep networks by making them deeper. This requires
scaling up the size of the training database accordingly. Indeed, a common
intuition is that the depth of a network and the size of its training set are
strongly correlated. However, recent work suggests that deep networks can be
trained on smaller datasets as long as the training samples are carefully selected
(curriculum learning, for instance). In this context, we introduce a scalable
and efficient active learning method that can be applied to most neural networks,
especially Convolutional Neural Networks (CNNs). To the best of our knowledge,
this paper is the first to design an active learning selection scheme based
on variational inference for neural networks. We also derive a formulation of
the posterior and prior distributions of the weights using statistical properties
of the Maximum Likelihood Estimator.
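As a rough illustration of the kind of formulation involved (a minimal sketch in our own notation, not taken from the paper): variational inference approximates the intractable weight posterior with a tractable distribution q, and the asymptotic normality of the Maximum Likelihood Estimator suggests a Gaussian choice centered at the MLE with the inverse Fisher information as covariance:
\[
  q^{\star} = \operatorname*{arg\,min}_{q \in \mathcal{Q}} \mathrm{KL}\!\left( q(w) \,\Vert\, p(w \mid \mathcal{D}) \right),
  \qquad
  q(w) = \mathcal{N}\!\left( w \;\middle|\; \hat{w}_{\mathrm{MLE}},\; I(\hat{w}_{\mathrm{MLE}})^{-1} \right),
\]
where w denotes the network weights, \mathcal{D} the training data, and I(\hat{w}_{\mathrm{MLE}}) the Fisher information matrix evaluated at the MLE.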
We describe the strategy behind our active learning criterion. We assess its
consistency by tracking the accuracy obtained over successive active learning steps
on two benchmark datasets, MNIST and USPS, and we demonstrate its scalability
with respect to increasing training set size.
TL;DR: Automatically building the labeled training set for CNNs with active learning. The criterion is derived from variational inference for neural networks and a Kronecker approximation of Fisher matrices for CNNs.
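For context, the Kronecker approximation referenced above is commonly written per layer in the standard K-FAC form; the notation below is an assumption on our part, not necessarily the paper's exact formulation:
\[
  F_{\ell} \;\approx\; \mathbb{E}\!\left[ a_{\ell-1}\, a_{\ell-1}^{\top} \right] \otimes \mathbb{E}\!\left[ g_{\ell}\, g_{\ell}^{\top} \right],
\]
where a_{\ell-1} are the inputs to layer \ell and g_{\ell} are the gradients of the loss with respect to that layer's pre-activations; the Kronecker structure makes storing and inverting each Fisher block tractable.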
Conflicts: unice.fr
Keywords: Deep Learning, Supervised Learning, Optimization