Predicting Fine-Tuning Performance with Probing

Published: 09 Apr 2022, Last Modified: 22 Oct 2023 · BigScience
Keywords: probing, fine-tuning, large NLP models
TL;DR: We show that probing results can be used to predict fine-tuning performance.
Abstract: Large NLP models have recently shown impressive performance on language understanding tasks, typically evaluated through fine-tuning. Alternatively, probing has received increasing attention as a lightweight method for interpreting the intrinsic mechanisms of large NLP models. In probing, post-hoc classifiers are trained on "out-of-domain" datasets that diagnose specific abilities. While probing language models has led to insightful findings, these findings appear disconnected from the development of the models themselves. This paper explores the utility of probing deep NLP models to extract a proxy signal widely used in model development: the fine-tuning performance. We find that the accuracies of only three probing results suffice to predict fine-tuning performance with errors 40% to 80% smaller than baselines. We further show the possibility of incorporating specialized probing datasets into the development of deep NLP models.
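
As a rough illustration of the core idea, the sketch below fits a regression from three probing accuracies to fine-tuning performance and compares its held-out error against a naive mean-prediction baseline. All numbers are synthetic, and the choice of ordinary least squares, the scikit-learn API, and the variable names are assumptions for illustration only; the paper's actual predictor, probing tasks, and baselines may differ.

```python
# Hypothetical sketch: regress fine-tuning performance on a small
# number of probing accuracies. Synthetic data, not from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Each row: accuracies of three probing classifiers for one model
# checkpoint (e.g., three diagnostic probing tasks).
probe_acc = np.array([
    [0.72, 0.65, 0.80],
    [0.75, 0.70, 0.83],
    [0.68, 0.60, 0.76],
    [0.80, 0.74, 0.88],
    [0.77, 0.69, 0.85],
])
# Observed fine-tuning performance for the same checkpoints.
finetune_acc = np.array([0.84, 0.87, 0.80, 0.91, 0.88])

# Fit on all but the last checkpoint; evaluate on the held-out one.
reg = LinearRegression().fit(probe_acc[:-1], finetune_acc[:-1])
pred = reg.predict(probe_acc[-1:])
print("predicted fine-tuning accuracy:", pred[0])
print("probe-based error:", mean_absolute_error(finetune_acc[-1:], pred))

# Naive baseline: always predict the mean training fine-tuning score.
baseline = np.full(1, finetune_acc[:-1].mean())
print("baseline error:", mean_absolute_error(finetune_acc[-1:], baseline))
```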
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2210.07352/code)