An Effective Low-Dimensional Software Code Representation using BERT and ELMo

Published: 01 Jan 2022, Last Modified: 09 Nov 2023, QRS 2022
Abstract: Contextualised word representations (e.g., ELMo and BERT) have been shown to outperform static representations (e.g., Word2vec, Fasttext, and GloVe) on many NLP tasks. In this paper, we investigate the use of contextualised embeddings for code search and classification, an area that has received less attention. We construct CodeELMo by training ELMo from scratch and fine-tuning CodeBERT embeddings using masked language modeling on natural language (NL) texts related to software development concepts and programming language (PL) texts consisting of method-comment pairs from open-source code bases. The dimensionality of the fine-tuned CodeBERT embeddings is reduced using linear transformations and augmented with the CodeELMo representation to develop CodeELBE, a low-dimensional contextualised software code representation. Results for binary classification and retrieval tasks show that CodeELBE considerably improves retrieval performance on standard deep code search datasets compared to CodeBERT and baseline BERT models.
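The pipeline the abstract describes (contextual CodeBERT vectors, a linear dimensionality reduction, and augmentation with an ELMo-style representation) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released implementation: the 128-d projection size, the mean pooling over tokens, and the elmo_vec input are illustrative choices; microsoft/codebert-base is the publicly available CodeBERT checkpoint.

```python
# Minimal sketch of a CodeELBE-style embedding (assumptions noted inline):
# project CodeBERT vectors down with a linear transformation, then
# concatenate with a CodeELMo vector supplied by the caller.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
codebert = AutoModel.from_pretrained("microsoft/codebert-base")

# Linear transformation mapping 768-d CodeBERT vectors to a lower
# dimension; 128 is an assumed target size, not taken from the paper.
projection = nn.Linear(codebert.config.hidden_size, 128, bias=False)

def codeelbe_embedding(code: str, elmo_vec: torch.Tensor) -> torch.Tensor:
    """Return a low-dimensional CodeELBE-style vector for a code snippet."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = codebert(**inputs).last_hidden_state  # (1, seq_len, 768)
    # Mean-pool token vectors (an assumed pooling strategy), then reduce.
    reduced = projection(hidden.mean(dim=1).squeeze(0))  # (128,)
    # Augment with the ELMo-style representation.
    return torch.cat([reduced, elmo_vec])

# Usage with a dummy 1024-d vector (ELMo's usual output width):
vec = codeelbe_embedding("def add(a, b): return a + b", torch.zeros(1024))
print(vec.shape)  # torch.Size([1152])
```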