Keywords: Test-time Adaptation, Human-AI Interaction, Model Robustness, Distribution Shift
TL;DR: We explore test-time adaptation from model-user interaction.
Abstract: We explore user interaction-based test-time adaptation (UITTA), which adapts a model to shifted test distributions with supervision signals from model-user interactions. Model adaptation in TTA can fail because models learn from noisy pseudo-labels of the test data. UITTA achieves better adaptation from user feedback on top-K predictions within two rounds of simulated interactions. To enable real-time adaptation, we further accelerate model optimization by reducing the cost of gradient backpropagation through random dropping of backward paths. Simulation experiments on cross-lingual transfer, domain generalization, and corruption robustness show that low-cost user feedback can significantly boost TTA performance, even competing with online active learning, which, however, requires expensive human annotation. By accelerating pre-trained language models, we reduce backpropagation cost by 70%–90% with only a small drop in performance.
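The abstract's acceleration idea, randomly dropping backward paths to cut backpropagation cost, admits a simple realization: freeze a random bottom prefix of layers before each forward pass, so autograd prunes the backward graph below the cutoff. The sketch below is a hypothetical toy illustration of that idea (the stacked `nn.Linear` model, `drop_ratio`, and `adaptation_step` are all illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a pre-trained LM: a stack of linear "layers".
layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(10)])
head = nn.Linear(16, 4)
opt = torch.optim.SGD(list(layers.parameters()) + list(head.parameters()), lr=1e-2)

def adaptation_step(x, y, drop_ratio=0.8):
    """One TTA update that skips backprop through a random-depth bottom
    prefix of layers (one plausible reading of 'dropping backward paths')."""
    k = int(drop_ratio * len(layers))   # number of bottom layers to skip
    for i, layer in enumerate(layers):
        # Freezing BEFORE the forward pass lets autograd prune the
        # frozen prefix from the backward graph entirely.
        layer.requires_grad_(i >= k)
    h = x
    for layer in layers:
        h = torch.relu(layer(h))
    loss = nn.functional.cross_entropy(head(h), y)
    opt.zero_grad()
    loss.backward()                      # no gradients flow below layer k
    opt.step()
    return loss.item()

x = torch.randn(8, 16)
y = torch.randint(0, 4, (8,))
loss = adaptation_step(x, y)
```

Because the input and all frozen parameters have `requires_grad=False`, activations in the frozen prefix carry no graph, so the backward pass touches only the top layers; with `drop_ratio=0.8` roughly 80% of per-layer backward work is skipped, in the spirit of the 70%–90% savings the abstract reports.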
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)