Code Summarization: Do Transformers Really Understand Code?

Published: 26 Mar 2022, Last Modified: 05 May 2023, DL4C 2022
Keywords: Code summarization, Code transformation, Code semantics
TL;DR: A study of the effects of code transformations on code summarization that questions the code-understanding capacity of Transformers and points to the need for better-curated datasets and training strategies to facilitate code understanding.
Abstract: Recent approaches to automatic code summarization rely on fine-tuned transformer-based language models, often injected with program analysis information. We perform empirical studies to analyze the extent to which these models understand the code they attempt to summarize. We observe that the models rely heavily on the textual cues present in comments, function names, and variable names, and that masking this information negatively impacts the generated summaries. Further, subtle code transformations that drastically alter program logic have no corresponding impact on the generated summaries. Overall, the quality of the summaries generated even by state-of-the-art (SOTA) models is quite poor, raising questions about the utility of current approaches and datasets.
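
The abstract describes two kinds of perturbations: masking textual cues (comments and identifier names) and applying subtle, semantics-altering code transformations. The snippet below is an illustrative sketch of what such perturbations might look like on a toy Python function; the example function and the specific transformations are assumptions for illustration, not the paper's actual probes or dataset.

```python
# Illustrative sketch (hypothetical example, not taken from the paper): two kinds of
# perturbations described in the abstract, applied to a toy Python function.

ORIGINAL = '''
def find_max(numbers):
    """Return the largest element of the list."""
    best = numbers[0]
    for n in numbers:
        if n > best:
            best = n
    return best
'''

# 1) Masking textual cues: drop the docstring and rename identifiers to
#    uninformative placeholders, leaving the program logic unchanged.
MASKED = '''
def f(a):
    v = a[0]
    for x in a:
        if x > v:
            v = x
    return v
'''

# 2) A subtle, semantics-altering transformation: flipping a single comparison
#    operator turns the maximum computation into a minimum computation, while
#    the docstring and identifier names (the textual cues) stay the same.
TRANSFORMED = ORIGINAL.replace("if n > best:", "if n < best:")

if __name__ == "__main__":
    print(MASKED)
    print(TRANSFORMED)
```

Under the paper's findings, a summarizer that truly understood the code would degrade on the masked variant only if it needed the logic-irrelevant cues, and would change its summary for the operator-flipped variant; the reported behavior is the opposite on both counts.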