Learning Augmentation Distributions using Transformed Risk Minimization

Published: 04 Aug 2023, Last Modified: 04 Aug 2023. Accepted by TMLR.
Abstract: We propose a new \emph{Transformed Risk Minimization} (TRM) framework as an extension of classical risk minimization. In TRM, we optimize not only over predictive models, but also over data transformations; specifically, over distributions thereof. As a key application, we focus on learning augmentations, for instance appropriate rotations of images, to improve classification performance with a given class of predictors. Our TRM method (1) jointly learns transformations and models in a \emph{single training loop}, (2) works with any training algorithm applicable to standard risk minimization, and (3) handles arbitrary transforms, including both discrete and continuous classes of augmentations. To avoid overfitting when implementing empirical transformed risk minimization, we propose a novel regularizer based on PAC-Bayes theory. For learning augmentations of images, we propose a new parametrization of the space of augmentations via a stochastic composition of blocks of geometric transforms. This leads to the new \emph{Stochastic Compositional Augmentation Learning} (SCALE) algorithm. The performance of TRM with SCALE compares favorably to prior methods on CIFAR10/100. Additionally, we show empirically that SCALE can correctly learn certain symmetries in the data distribution (recovering rotations on rotated MNIST) and can also improve calibration of the learned model.
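As a rough illustration of the single-training-loop idea described in the abstract, the sketch below jointly optimizes a classifier and a distribution over augmentations (here, a single learnable rotation-range parameter) by sampling a transform per batch and backpropagating through it. This is not the authors' implementation: names such as RotationAugmenter and max_angle_logit are hypothetical, and the paper's PAC-Bayes regularizer is omitted.

```python
# Minimal sketch of jointly training a model and an augmentation distribution
# in one loop (TRM/SCALE-style idea); hypothetical names, PAC-Bayes term omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RotationAugmenter(nn.Module):
    """Learns the width of a uniform distribution over rotation angles."""
    def __init__(self):
        super().__init__()
        # Unconstrained parameter; sigmoid maps it to a max angle in [0, pi].
        self.max_angle_logit = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b = x.size(0)
        max_angle = torch.sigmoid(self.max_angle_logit) * torch.pi
        # Reparameterized sample: angle = u * max_angle, u ~ Uniform(-1, 1),
        # so gradients flow back to max_angle_logit.
        u = torch.rand(b, device=x.device) * 2 - 1
        angles = u * max_angle
        cos, sin = torch.cos(angles), torch.sin(angles)
        zeros = torch.zeros_like(cos)
        theta = torch.stack(
            [torch.stack([cos, -sin, zeros], dim=1),
             torch.stack([sin,  cos, zeros], dim=1)], dim=1)  # (b, 2, 3)
        grid = F.affine_grid(theta, list(x.shape), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
augmenter = RotationAugmenter()
opt = torch.optim.Adam(
    list(model.parameters()) + list(augmenter.parameters()), lr=1e-3)

# Toy batch standing in for a real dataloader.
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
for step in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(augmenter(x)), y)
    loss.backward()  # gradients reach both the classifier and the
    opt.step()       # augmentation-distribution parameter
```

Without the regularizer mentioned in the abstract, such a loop would tend to collapse the augmentation distribution toward the identity transform; the PAC-Bayes term proposed in the paper is what counteracts that overfitting.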
Submission Length: Long submission (more than 12 pages of main content)
Supplementary Material: zip
Assigned Action Editor: ~Andriy_Mnih1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 908