Test-Time AutoEval with Supporting Self-supervision

22 Sept 2022 (modified: 13 Feb 2023) | ICLR 2023 Conference Withdrawn Submission | Readers: Everyone
Keywords: Test-Time, AutoEval, Self-supervised Learning
TL;DR: A new framework for unsupervised model evaluation without touching training sets
Abstract: The Automatic Model Evaluation (AutoEval) framework aims to evaluate a trained machine learning model without resorting to a labeled test set, which is often inaccessible in real-world scenarios. Existing AutoEval methods rely on measuring the distribution shift between the unlabeled test set and the training set. However, this line of work does not fit well in real-world ML applications, such as edge computing devices, where the original training set is inaccessible. Contrastive Learning (CL) is an efficient self-supervised learning task that learns visual representations useful for downstream classification tasks. In our work, we find, surprisingly, that CL accuracy and classification accuracy exhibit a strong linear correlation ($r > 0.88$). This finding motivates us to regress classification accuracy on CL accuracy. In our experiments, we show that, without touching training sets, our framework achieves results comparable to SOTA AutoEval baselines. Moreover, subsequent experiments demonstrate that different CL approaches and model architectures fit easily into our framework.
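To make the regression idea concrete, here is a minimal sketch of the accuracy-regression step, assuming hypothetical paired measurements of CL accuracy (computable on an unlabeled set, since the pretext labels are self-generated) and classification accuracy (known only for the sets used to fit the regressor). The numbers and the use of `sklearn.linear_model.LinearRegression` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical paired measurements collected on several held-out sample sets:
# CL (pretext) accuracy, measurable without ground-truth class labels,
# and classification accuracy, known for these calibration sets.
cl_acc = np.array([0.61, 0.68, 0.72, 0.75, 0.81, 0.84]).reshape(-1, 1)
cls_acc = np.array([0.55, 0.63, 0.69, 0.71, 0.78, 0.83])

# Verify the strong linear correlation that motivates the framework.
print(f"Pearson r: {np.corrcoef(cl_acc.ravel(), cls_acc)[0, 1]:.3f}")

# Fit the linear regressor: classification accuracy ~ a * CL accuracy + b.
reg = LinearRegression().fit(cl_acc, cls_acc)

# At test time, only CL accuracy on the unlabeled target set is needed
# to estimate the classifier's accuracy there.
new_cl_acc = np.array([[0.78]])
print(f"Estimated classification accuracy: {reg.predict(new_cl_acc)[0]:.3f}")
```

Because the regressor is fit purely on (CL accuracy, classification accuracy) pairs, no access to the original training set is required at evaluation time, which is the key difference from distribution-shift-based AutoEval methods.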
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: General Machine Learning (i.e., none of the above)