Risk-Averse Predictions on Unseen Domains via Neural Style Smoothing

Published: 20 Jun 2023, Last Modified: 07 Aug 2023, AdvML-Frontiers 2023
Keywords: Risk Averse Predictions, Neural Style Smoothing
TL;DR: We propose an inference and a training procedure based on neural style smoothing to obtain risk-averse predictions from any classifier and improve its reliability in risk-sensitive settings.
Abstract: Achieving high accuracy on data from domains unseen during training is a fundamental challenge in machine learning. While state-of-the-art neural networks achieve impressive performance on various tasks, their predictions are biased towards domain-dependent information (e.g., image styles) rather than domain-invariant information (e.g., image content). This makes them unreliable for deployment in risk-sensitive settings such as autonomous driving. In this work, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that produces risk-averse predictions using a "style-smoothed" version of a classifier. Specifically, the style-smoothed classifier classifies a test image as the most probable class predicted by the original classifier on random re-stylizations of the test image. TT-NSS uses a neural style transfer module to stylize the test image on the fly, requires only black-box access to the classifier, and, crucially, abstains when the predictions of the original classifier on the stylized images lack consensus. We further propose a neural style smoothing-based training procedure that improves the prediction consistency and the performance of the style-smoothed classifier on non-abstained samples. Our experiments on the PACS dataset and its variations, in both single- and multiple-domain settings, highlight the effectiveness of our methods at producing risk-averse predictions on unseen domains.
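The inference procedure described in the abstract (majority vote over random re-stylizations, with abstention when consensus is low) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: `tt_nss_predict`, the toy image representation, and the toy classifiers are all hypothetical stand-ins, and the paper's actual method uses a neural style transfer network rather than the trivial `toy_stylize` below.

```python
from collections import Counter

def tt_nss_predict(classify, stylize, image, n_styles=20, threshold=0.6):
    """Risk-averse prediction via style smoothing (sketch).

    classify: black-box classifier, image -> class label
    stylize:  stylization module, (image, seed) -> re-stylized image
    Returns the majority label over n_styles re-stylizations, or None
    (abstain) if the majority's vote share falls below `threshold`.
    """
    votes = Counter(classify(stylize(image, seed)) for seed in range(n_styles))
    label, count = votes.most_common(1)[0]
    return label if count / n_styles >= threshold else None

# Hypothetical toy setup: an "image" is a dict with a content bit and a
# style value; stylization perturbs only the style component.
def toy_stylize(image, seed):
    return {"content": image["content"], "style": seed % 2}

def robust_classify(image):   # content-driven: full consensus across styles
    return image["content"]

def brittle_classify(image):  # style-driven: predictions flip with the style
    return image["content"] if image["style"] == 0 else 1 - image["content"]

img = {"content": 1, "style": 0}
print(tt_nss_predict(robust_classify, toy_stylize, img))   # -> 1
print(tt_nss_predict(brittle_classify, toy_stylize, img))  # -> None (abstain)
```

The abstention branch is the risk-averse part: a style-biased classifier produces a split vote over re-stylizations, so the smoothed classifier declines to predict instead of returning an unreliable label.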
Supplementary Material: zip
Submission Number: 61