Keywords: robustness, uncertainty, anomaly detection, calibration, adversarial robustness
TL;DR: While other methods trade off between different safety measures, mixing fractal and DeepDream images into training improves all measures across the board.
Abstract: In real-world applications of machine learning, robust systems must consider measures of performance beyond standard test accuracy. These include out-of-distribution (OOD) robustness, prediction consistency, resilience to adversaries, calibrated uncertainty estimates, and the ability to detect anomalous inputs. However, optimizing for some of these measures often sacrifices performance on others. For instance, adversarial training improves adversarial robustness but degrades standard classifier performance. Similarly, strong data augmentation and regularization techniques often improve OOD robustness at the cost of weaker anomaly detection, raising the question of whether a Pareto improvement is possible. We identify a weakness of existing data augmentation techniques: while they inject additional entropy into the training set, that entropy lacks substantial structural complexity. This leads us to design a new data augmentation strategy that exploits the natural structural complexity of fractals, which outperforms numerous baselines and is the first method to comprehensively improve safety measures.
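The augmentation the abstract describes can be illustrated with a minimal sketch: a training image is repeatedly blended with a structurally complex "mixing picture" (e.g. a fractal or DeepDream image) using randomly chosen additive or multiplicative mixing. The function name, round count, and mixing-weight distribution below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def fractal_mix(image, mixer, max_rounds=4, beta=3.0, rng=None):
    """Blend `image` (floats in [0, 1]) with a complex `mixer` picture.

    Illustrative sketch: each round mixes the running result with either
    the mixing picture or the original image, using an additive or a
    multiplicative (geometric) blend. Hyperparameters are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    mixed = image.copy()
    for _ in range(rng.integers(1, max_rounds + 1)):
        # Mix against the fractal half the time, the clean image otherwise.
        other = mixer if rng.random() < 0.5 else image
        # Beta-distributed weight skewed toward keeping the running result.
        w = rng.beta(beta, 1.0)
        if rng.random() < 0.5:
            mixed = w * mixed + (1.0 - w) * other            # additive blend
        else:
            mixed = (mixed ** w) * (other ** (1.0 - w))      # geometric blend
        mixed = np.clip(mixed, 0.0, 1.0)
    return mixed
```

Because the mixing pictures carry real structural complexity (unlike, say, uniform noise), each augmented sample adds structured rather than unstructured entropy to the training set.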
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/pixmix-dreamlike-pictures-comprehensively/code)