Keywords: Entropy, Robustness, Auxiliary Batch Normalization
TL;DR: To improve standard accuracy and robustness against OOD domains, we propose a novel entropy-based disentangled learning method via auxiliary batch normalization layers.
Abstract: Despite their impressive performance, deep neural networks (DNNs) struggle to generalize to out-of-distribution domains that differ from those seen during training.
In practical applications, it is important for DNNs to have both high standard accuracy and robustness against out-of-distribution domains.
One technique that achieves both of these improvements is disentangled learning with mixture distribution via auxiliary batch normalization layers (ABNs).
This technique treats clean and transformed samples as different domains, allowing a DNN to learn better features from mixed domains.
However, when we distinguish the domains of the samples on the basis of entropy, we find that some transformed samples are drawn from the same domain as clean samples; the two are therefore not completely different domains.
To generate samples drawn from a completely different domain than clean samples, we hypothesize that transforming clean high-entropy samples to further increase their entropy generates out-of-distribution samples that are much further away from the in-distribution domain.
On the basis of this hypothesis, we propose high entropy propagation (EntProp), which feeds high-entropy samples to the network that uses ABNs.
We introduce two techniques, data augmentation and free adversarial training, that increase entropy and push the samples further away from the in-distribution domain.
These techniques incur no additional training cost.
Our experimental results show that EntProp achieves higher standard accuracy and robustness with a lower training cost than the baseline methods.
In particular, EntProp is highly effective at training on small datasets.
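To make the idea concrete, below is a minimal, hedged sketch of an EntProp-style training step, not the authors' implementation. It assumes a hypothetical PyTorch model whose forward pass accepts a `use_aux_bn` flag to switch between the main batch-norm layers and the auxiliary ones (ABNs), and a generic `augment` callable standing in for any entropy-increasing transform (e.g., strong data augmentation or a free-adversarial perturbation).

```python
# Hedged sketch of an EntProp-style step (assumed interfaces, not the paper's code).
import torch
import torch.nn.functional as F

def entropy(logits):
    """Per-sample predictive entropy H(p) = -sum_c p_c log p_c."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)

def entprop_step(model, x, y, augment, optimizer, ratio=0.5):
    """One training step: clean loss on all samples through the main BNs,
    plus a loss on the transformed high-entropy subset through the ABNs."""
    model.train()
    optimizer.zero_grad()

    # Clean forward pass through the main batch-norm layers.
    logits_clean = model(x, use_aux_bn=False)  # `use_aux_bn` is an assumed flag
    loss = F.cross_entropy(logits_clean, y)

    # Select the highest-entropy clean samples; transforming them is assumed
    # to push them further from the in-distribution domain.
    with torch.no_grad():
        ent = entropy(logits_clean)
    k = max(1, int(ratio * x.size(0)))
    idx = ent.topk(k).indices

    # Transform the selected samples and route them through the ABNs.
    x_aux = augment(x[idx])
    logits_aux = model(x_aux, use_aux_bn=True)
    loss = loss + F.cross_entropy(logits_aux, y[idx])

    loss.backward()
    optimizer.step()
    return loss.item()
```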
List Of Authors: Enomoto, Shohei
Latex Source Code: zip
Signed License Agreement: pdf
Submission Number: 327