Meta Learning for Code Summarization

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Source code summarization is the task of generating a high-level natural language description for a segment of programming language code. Current neural models for the task differ in their architecture and the aspects of code they consider. In this paper, we show that three SOTA models for code summarization work well on largely disjoint subsets of a large codebase. This complementarity motivates model combination: we propose three meta-models that select the best candidate summary for a given code segment. The two neural meta-models improve significantly over the performance of the best individual model, obtaining an improvement of 2.1 BLEU points on a dataset of code segments where at least one of the individual models obtains a non-zero BLEU.
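To make the combination idea concrete, the selection step can be sketched as follows. This is not the paper's method: the paper's meta-models are learned selectors, whereas this sketch shows the oracle upper bound that selection targets, and it uses a simplistic unigram-overlap scorer as a hypothetical stand-in for BLEU.

```python
from collections import Counter

def overlap_score(candidate: str, reference: str) -> float:
    # Unigram-overlap precision: an illustrative stand-in for BLEU.
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    matches = sum(min(n, ref[tok]) for tok, n in cand.items())
    total = sum(cand.values())
    return matches / total if total else 0.0

def oracle_select(candidates: list[str], reference: str) -> str:
    # Oracle meta-model: among the candidate summaries produced by the
    # individual models, pick the one scoring highest against the
    # reference. A learned meta-model tries to approximate this choice
    # without seeing the reference.
    return max(candidates, key=lambda c: overlap_score(c, reference))

reference = "returns the maximum value in the list"
candidates = [
    "sorts the list in place",
    "returns the largest element of a list",
    "returns the maximum value in the list",
]
best = oracle_select(candidates, reference)
```

The gap between the oracle's score and each individual model's score is what motivates training a selector: if the models' strong subsets are largely disjoint, the oracle sits well above any single model.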
