Hierarchical Code Embeddings with Multi-Level Attention for Reinforcement Learning State Representation

ICLR 2026 Conference Submission 25398 Authors

20 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Multi-Level Attention
Abstract: In this paper, we propose a novel state representation system for reinforcement learning (RL) that encodes code semantics hierarchically using multiple attention mechanisms. Traditional approaches often treat code embeddings as flat sequences or rely solely on graph-based representations, neither of which captures the interplay between local and global code features. The proposed method combines token-level, function-level, and module-level attention with graph-structured dependencies, allowing the RL agent to reason about code at varying granularities while preserving structural relationships.
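
To make the hierarchy described in the abstract concrete, here is a minimal sketch of a token-to-function-to-module attention encoder, assuming a PyTorch implementation. All class names, dimensions, and the attention-pooling scheme are illustrative assumptions, not the authors' actual architecture; in particular, the graph-structured dependencies could plausibly enter as an attention mask at the module level, which is only noted in a comment here.

```python
# Hypothetical sketch of a hierarchical code encoder for RL state
# representation. Not the paper's implementation.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Collapses a sequence of embeddings into one vector via learned attention."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                        # x: (batch, seq, dim)
        w = torch.softmax(self.score(x), dim=1)  # attention weights over the sequence
        return (w * x).sum(dim=1)                # (batch, dim)

class HierarchicalCodeEncoder(nn.Module):
    """Token -> function -> module attention, producing one RL state vector."""
    def __init__(self, vocab_size, dim=128, heads=4):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        # Token-level self-attention within each function body.
        self.tok_attn = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.tok_pool = AttentionPool(dim)
        # Function-level self-attention across the functions of a module.
        self.fn_attn = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.fn_pool = AttentionPool(dim)
        # Module-level attention across modules; the paper's graph-structured
        # dependencies could be injected here as an attention mask derived
        # from the import/call graph (omitted in this sketch).
        self.mod_attn = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.mod_pool = AttentionPool(dim)

    def forward(self, tokens):
        # tokens: (modules, functions, tokens_per_function) integer ids
        m, f, t = tokens.shape
        x = self.tok_emb(tokens.view(m * f, t))        # embed every token
        x = self.tok_attn(x)                           # token-level attention
        fn_vecs = self.tok_pool(x).view(m, f, -1)      # one vector per function
        fn_vecs = self.fn_attn(fn_vecs)                # function-level attention
        mod_vecs = self.fn_pool(fn_vecs).unsqueeze(0)  # (1, modules, dim)
        mod_vecs = self.mod_attn(mod_vecs)             # module-level attention
        return self.mod_pool(mod_vecs).squeeze(0)      # state vector: (dim,)

enc = HierarchicalCodeEncoder(vocab_size=10_000)
state = enc(torch.randint(0, 10_000, (3, 4, 16)))  # 3 modules, 4 functions, 16 tokens each
print(state.shape)  # torch.Size([128])
```

Each level attends within its own scope and is then pooled into the level above, so the final vector mixes local (token) and global (module) information, which is the interplay the abstract argues flat or purely graph-based encodings miss.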
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 25398