On the Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: robustness, invariance, data augmentation, consistency loss
Abstract: Data augmentation is one of the most popular techniques for improving the robustness of neural networks. In addition to directly training the model with original samples and augmented samples, a torrent of methods regularizing the distance between embeddings/representations of the original samples and their augmented counterparts have been introduced. In this paper, we explore these various regularization choices, seeking to provide a general understanding of how we should regularize the embeddings. Our analysis suggests how the ideal choices of regularization correspond to various assumptions. With an invariance test, we show that regularization is important if the model is to be used in a broader context than the in-lab setting because non-regularized approaches are limited in learning the concept of invariance, despite equally high accuracy. Finally, we also show that the generic approach we identified (squared $\ell_2$ norm regularized augmentation) performs better than several recent methods, which are each specially designed for one task and significantly more complicated than ours, over three different tasks.
One-sentence Summary: We show that a consistency loss is important for learning robust and invariant representations when using data augmentation, and that squared $\ell_2$ norm regularization is the best candidate.
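
A minimal sketch of the objective the abstract describes: cross-entropy on both the original and augmented samples, plus a squared $\ell_2$ penalty on the distance between their embeddings. This is an illustration under stated assumptions, not the authors' released code; SmallNet, consistency_loss, the noise "augmentation", and lam are all hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy classifier that exposes its penultimate embedding."""
    def __init__(self, in_dim=32, emb_dim=16, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
        self.head = nn.Linear(emb_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)            # embedding/representation
        return self.head(z), z

def consistency_loss(model, x, x_aug, y, lam=1.0):
    logits, z = model(x)
    logits_aug, z_aug = model(x_aug)
    # supervised loss on original and augmented samples
    ce = F.cross_entropy(logits, y) + F.cross_entropy(logits_aug, y)
    # squared l2 norm between embeddings of original and augmented samples
    reg = (z - z_aug).pow(2).sum(dim=1).mean()
    return ce + lam * reg

# Usage with random data and additive noise as a stand-in augmentation:
model = SmallNet()
x = torch.randn(8, 32)
x_aug = x + 0.1 * torch.randn_like(x)
y = torch.randint(0, 10, (8,))
loss = consistency_loss(model, x, x_aug, y, lam=1.0)
loss.backward()
```

The squared (rather than plain) $\ell_2$ distance keeps the penalty smooth and differentiable at zero, which is one reason it is a natural generic choice for this regularizer.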
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=zaB_iJqnIM