Abstract: Effective training of neural networks requires large amounts of data. In the low-data regime,
parameters are underdetermined, and learnt networks generalise poorly. Data
Augmentation (Krizhevsky et al., 2012) alleviates this by using existing data
more effectively. However standard data augmentation produces only limited
plausible alternative data. Given there is potential to generate a much broader set
of augmentations, we design and train a generative model to do data augmentation.
The model, based on image-conditional Generative Adversarial Networks, takes
data from a source domain and learns to take any data item and generalise it
to generate other within-class data items. As this generative process does not
depend on the classes themselves, it can be applied to novel unseen classes of data.
We show that a Data Augmentation Generative Adversarial Network (DAGAN)
augments standard vanilla classifiers well. We also show a DAGAN can enhance
few-shot learning systems such as Matching Networks. We demonstrate these
approaches on Omniglot, on EMNIST (having learnt the DAGAN on Omniglot), and on
VGG-Face data. In our experiments we see an increase in accuracy of over 13% in
the low-data regime on Omniglot (from 69% to 82%), with further gains on EMNIST
(from 73.9% to 76%) and VGG-Face (from 4.5% to 12%); with Matching Networks we
observe an increase of 0.5% on Omniglot (from 96.9% to 97.4%) and of 1.8% on
EMNIST (from 59.5% to 61.3%).
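To give a concrete picture of the kind of model the abstract describes, here is a minimal PyTorch sketch of an image-conditional generator in the DAGAN spirit. It is not the authors' exact architecture: the 28x28 single-channel input, all layer sizes, and the class name `DAGANGenerator` are illustrative assumptions. A conditioning image is encoded, concatenated with Gaussian noise, and decoded into a new sample intended to belong to the same class, so the process never depends on class identity and can be applied to unseen classes.

```python
# Minimal sketch (assumed shapes and names), not the paper's exact architecture.
import torch
import torch.nn as nn

class DAGANGenerator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        # Encoder: compress the conditioning image into a latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256),
        )
        # Decoder: map (image code, noise) back to image space.
        self.decoder = nn.Sequential(
            nn.Linear(256 + z_dim, 64 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (64, 7, 7)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 7x7 -> 14x14
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),   # 14x14 -> 28x28
        )

    def forward(self, x, z):
        h = self.encoder(x)                            # per-image latent code
        return self.decoder(torch.cat([h, z], dim=1))  # noise perturbs the code

# Example: one augmented sample per conditioning image.
x = torch.randn(8, 1, 28, 28)   # batch of conditioning images
z = torch.randn(8, 100)         # fresh noise for each image
x_aug = DAGANGenerator()(x, z)  # same shape as x; trained adversarially in the paper
```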
TL;DR: Conditional GANs trained to generate data-augmented samples of their conditional inputs, used to enhance vanilla classifiers and one-shot learning systems such as matching networks and pixel distance
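To make the augmentation use concrete: once such a generator has been trained, each real example can be passed through it several times with fresh noise, and the synthetic outputs inherit the conditioning example's label before a vanilla classifier (or few-shot learner) is trained on the enlarged set. The sketch below assumes the hypothetical `DAGANGenerator` above and a standard PyTorch `DataLoader` named `real_loader`; it illustrates the idea rather than reproducing the authors' pipeline.

```python
# Sketch of the augmentation loop implied by the abstract (assumed names).
import torch

def augment_dataset(generator, real_loader, samples_per_item=4, z_dim=100):
    """Expand a small labelled set with within-class generated samples."""
    generator.eval()
    images, labels = [], []
    with torch.no_grad():
        for x, y in real_loader:                   # small labelled source batches
            images.append(x)                       # keep the original data
            labels.append(y)
            for _ in range(samples_per_item):
                z = torch.randn(x.size(0), z_dim)  # fresh noise per pass
                images.append(generator(x, z))     # synthetic within-class samples
                labels.append(y)                   # reuse the conditioning labels
    return torch.cat(images), torch.cat(labels)    # enlarged training set
```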
Code: [7 community implementations](https://paperswithcode.com/paper/?openreview=S1Auv-WRZ)
Community Implementations: [6 code implementations](https://www.catalyzex.com/paper/data-augmentation-generative-adversarial/code)