An AST Structure Enhanced Decoder for Code Generation

Published: 2022, Last Modified: 04 Nov 2025. IEEE/ACM Trans. Audio Speech Lang. Process., 2022. License: CC BY-SA 4.0
Abstract: Currently, the most dominant neural code generation models are often equipped with a tree-structured LSTM decoder, which outputs a sequence of actions to construct an Abstract Syntax Tree (AST) via pre-order traversal. However, such a decoder has two obvious drawbacks. First, apart from the parent action, other distant yet important history actions rarely contribute to the current decision. Second, it neglects future actions, which may be crucial for predicting the current action. To deal with these issues, in this paper we propose a novel AST structure enhanced decoder for code generation, which significantly extends the decoder in the above two respects. First, we introduce an AST-information-enhanced attention mechanism to fully exploit history actions, whose impacts are further distinguished according to their syntactic distances, action types, and relative positions. Second, we jointly model the predictions of the current action and its important future action via multi-task learning, where the learned hidden state of the latter can be further leveraged to improve the former. Experimental results on commonly used datasets demonstrate the effectiveness of our proposed decoder.
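The attention mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, parameters, and the additive form of the structural biases are assumptions made for illustration. The idea shown is that attention scores over history-action states are adjusted by learned biases derived from each action's syntactic (tree) distance to the current node, its action type, and its relative position in the action sequence.

```python
import numpy as np

def structure_biased_attention(query, keys, distances, type_ids, positions,
                               w_dist, type_bias, w_pos):
    """Hypothetical sketch of AST-structure-biased attention.

    query:     (d,)   decoder state for the current action
    keys:      (T, d) hidden states of the T history actions
    distances: (T,)   syntactic (tree) distance of each history action
    type_ids:  (T,)   integer action-type id of each history action
    positions: (T,)   relative position of each history action
    w_dist, w_pos: learned scalar weights for the distance/position biases
    type_bias: (num_types,) learned per-action-type bias
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # scaled dot-product base scores
    scores = scores + w_dist * distances # bias by syntactic distance
    scores = scores + type_bias[type_ids]  # bias by action type
    scores = scores + w_pos * positions  # bias by relative position
    weights = np.exp(scores - scores.max())  # stable softmax
    weights = weights / weights.sum()
    return weights @ keys                # context vector, shape (d,)

# Usage with toy values (all numbers are illustrative):
q = np.ones(4)
K = np.arange(12, dtype=float).reshape(3, 4)
ctx = structure_biased_attention(
    q, K,
    distances=np.array([1.0, 2.0, 3.0]),
    type_ids=np.array([0, 1, 0]),
    positions=np.array([-1.0, -2.0, -3.0]),
    w_dist=0.1, type_bias=np.zeros(2), w_pos=0.05,
)
```

In this form, history actions that are syntactically close (or of a favored type) receive larger attention weights, letting distant but structurally relevant actions influence the current decision.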