On-the-Fly Adaptation of Source Code Models

Published: 03 Nov 2020, Last Modified: 05 May 2023 — NeurIPS 2020 CAP Workshop
Keywords: adaptation, source code, code auto-completion
TL;DR: We propose a technique that selects targeted information from the test file and learns adapted parameters, which are then used to predict a hole in the file.
Abstract: The ability to adapt to unseen, local contexts is an important challenge that successful models of source code must overcome. One of the most popular approaches for the adaptation of such models is dynamic evaluation, in which, when running a model on an unseen file, the model is updated immediately after having observed each token in that file. In this work, we propose instead to approach this problem in two steps: (a) we select targeted information ("support tokens") from the given context; (b) we use these support tokens to learn adapted parameters, which are then used to predict the target hole. We refer to our proposed framework as Targeted Support Set Adaptation (TSSA). We consider an evaluation setting that we call "line-level maintenance", designed to reflect the downstream task of code auto-completion in an IDE. In experiments on a large-scale Java GitHub corpus, we demonstrate improved performance compared to other adaptation baselines, including dynamic evaluation. Moreover, our analysis shows that, compared to a non-adaptive baseline, our approach improves performance on identifiers and literals by 44% and 19%, respectively.
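The abstract only sketches the two-step procedure at a high level. The snippet below is a rough, hypothetical illustration of that idea in PyTorch-style Python, not the authors' implementation: the overlap-based selection heuristic, the function names (select_support_tokens, adapt_and_predict), and all hyperparameters are assumptions, and the model is assumed to map token ids of shape [batch, seq] to logits of shape [batch, seq, vocab].

```python
# Hypothetical sketch of the two-step adaptation idea: (a) select support
# tokens from the file context, (b) take a few gradient steps on them with a
# copy of the model, then predict the target hole with the adapted copy.
import copy
import torch
import torch.nn.functional as F


def select_support_tokens(context_ids, hole_prefix_ids, window=16, k_windows=4):
    """Stand-in selection heuristic: score fixed-size windows of the context
    by lexical overlap with the tokens preceding the hole and keep the
    top-scoring windows as the support set."""
    prefix_vocab = set(hole_prefix_ids)
    windows = [context_ids[i:i + window]
               for i in range(0, max(len(context_ids) - window + 1, 1), window)]
    windows.sort(key=lambda w: sum(t in prefix_vocab for t in w), reverse=True)
    return [tok for w in windows[:k_windows] for tok in w]


def adapt_and_predict(model, context_ids, hole_prefix_ids, steps=3, lr=1e-4):
    """Adapt a copy of the model on the support tokens, then predict the
    next token of the hole; the base model's parameters are left untouched."""
    support = select_support_tokens(context_ids, hole_prefix_ids)
    adapted = copy.deepcopy(model)                  # base parameters stay fixed
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    ids = torch.tensor([support], dtype=torch.long)
    for _ in range(steps):                          # inner-loop adaptation
        logits = adapted(ids[:, :-1])               # next-token prediction loss
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               ids[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    prefix = torch.tensor([hole_prefix_ids], dtype=torch.long)
    with torch.no_grad():
        return adapted(prefix)[0, -1].argmax().item()  # predicted hole token
```

In this reading, the contrast with dynamic evaluation is that the update uses only a small, targeted support set chosen for the specific hole, rather than every token observed so far in the file.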