Diffusion-TTA: Test-time Adaptation of Discriminative Models via Generative Feedback

Published: 21 Sept 2023, Last Modified: 06 Jan 2024, NeurIPS 2023 poster
Keywords: test-time adaptation, diffusion models, generative models, classification, segmentation, depth prediction
TL;DR: We introduce Diffusion-TTA, an effective plug-and-play method for test time adaptation that uses generative feedback from a pre-trained diffusion model to adapt large-scale pre-trained discriminative models.
Abstract: The advancements in generative modeling, particularly the advent of diffusion models, have sparked a fundamental question: how can these models be effectively used for discriminative tasks? In this work, we find that generative models can be great test-time adapters for discriminative models. Our method, Diffusion-TTA, adapts pre-trained discriminative models such as image classifiers, segmenters, and depth predictors to each unlabelled example in the test set using generative feedback from a diffusion model. We achieve this by modulating the conditioning of the diffusion model with the output of the discriminative model. We then maximize the image likelihood objective by backpropagating the gradients to the discriminative model's parameters. We show that Diffusion-TTA significantly enhances the accuracy of various large-scale pre-trained discriminative models, such as ImageNet classifiers, CLIP models, image pixel labellers, and image depth predictors. Diffusion-TTA outperforms existing test-time adaptation methods, including TTT-MAE and TENT, and particularly shines in online adaptation setups, where the discriminative model is continually adapted to each example in the test set. We provide access to code, results, and visualizations on our website: diffusion-tta.github.io/
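To make the adaptation loop concrete, here is a minimal sketch of one Diffusion-TTA step, written against hypothetical interfaces: `classifier(image)` returns class logits, `diffusion(noisy, t, cond)` is an epsilon-prediction diffusion model with an `alphas_cumprod` buffer, and `class_embeddings` is an assumed matrix of per-class conditioning vectors. None of these names come from the paper's released code; the sketch only illustrates the idea of weighting the diffusion conditioning by predicted class probabilities and backpropagating the denoising loss into the classifier.

```python
import torch
import torch.nn.functional as F

def diffusion_tta_step(classifier, diffusion, class_embeddings,
                       image, optimizer, n_timesteps=1000):
    """One test-time adaptation step (illustrative sketch, assumed interfaces)."""
    optimizer.zero_grad()

    # Discriminative prediction on the clean test image: (B, C) probabilities.
    probs = classifier(image).softmax(dim=-1)

    # Modulate the diffusion conditioning with the predicted probabilities:
    # a probability-weighted mix of per-class conditioning embeddings, (B, D).
    cond = probs @ class_embeddings

    # Standard DDPM forward process: sample a timestep and noise the image.
    # `alphas_cumprod` is assumed to be exposed by the diffusion model.
    t = torch.randint(0, n_timesteps, (image.shape[0],), device=image.device)
    noise = torch.randn_like(image)
    a_bar = diffusion.alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy = a_bar.sqrt() * image + (1.0 - a_bar).sqrt() * noise

    # Denoising loss; gradients flow through `cond` into the classifier,
    # so stepping the optimizer updates the discriminative model only.
    loss = F.mse_loss(diffusion(noisy, t, cond), noise)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the online setup described in the abstract, a step like this would be applied to each incoming test example in turn, with the optimizer tracking only the classifier's parameters so the diffusion model stays frozen.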
Submission Number: 8895