On the Importance of Data Size in Probing Fine-tuned Models

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Several studies have investigated the reasons behind the effectiveness of fine-tuning, usually through the lens of probing. However, these studies often neglect the role of the size of the dataset on which the model is fine-tuned. In this paper, we highlight the importance of this factor and its undeniable role in probing performance. We show that the extent of encoded linguistic knowledge depends on the number of fine-tuning samples, specifically on the number of iterations for which the model is updated. The analysis also reveals that larger training data mainly affects higher layers, and that the extent of this change is a function of the number of fine-tuning iterations rather than the diversity of the training samples. Finally, we show through a set of experiments that fine-tuning introduces shallow and recoverable changes to the model's representations.
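Probing illustration (not part of the submission): the layer-wise probing analysis the abstract refers to can be approximated with a minimal sketch like the one below, which trains a linear probe on the frozen hidden states of each layer and reports per-layer accuracy. It assumes the HuggingFace transformers and scikit-learn libraries; the model name, toy task, sentences, and labels are illustrative placeholders rather than the paper's actual setup.

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative checkpoint; a fine-tuned model could be substituted here for comparison.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# Toy probing task (placeholder): does the sentence contain a past-tense verb?
sentences = ["She walked home.", "He eats lunch.", "They played chess.", "I read books."]
labels = np.array([1, 0, 1, 0])

# Extract a fixed representation per sentence for every layer (mean-pooled tokens).
with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden_states = model(**enc).hidden_states  # tuple: embedding layer + one entry per layer

for layer, states in enumerate(hidden_states):
    feats = states.mean(dim=1).numpy()          # shape: (num_sentences, hidden_size)
    probe = LogisticRegression(max_iter=1000)   # linear probe on frozen features
    score = cross_val_score(probe, feats, labels, cv=2).mean()
    print(f"layer {layer:2d}: probing accuracy {score:.2f}")

Comparing the printed per-layer accuracies of a pre-trained checkpoint against fine-tuned checkpoints trained on different amounts of data (or for different numbers of iterations) is the kind of layer-wise comparison the abstract describes.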
