Graph-Energy Reinforcement Learning: Adaptive Reward Design for API Usage Pattern Mining with OOD Detection
Keywords: OOD Detection
Abstract: \begin{abstract}
We propose Graph-Energy Reinforcement Learning (GERL), a novel framework for
mining API usage patterns with robust out-of-distribution (OOD) detection.
The growing complexity of API ecosystems demands adaptive methods that
distinguish in-distribution from anomalous patterns, yet existing approaches
often rely on static thresholds or lack structural awareness. GERL addresses
this by integrating energy-based OOD scoring with graph diffusion in a
reinforcement learning (RL) framework, enabling dynamically designed rewards
that guide exploration of graph-structured API spaces. The core innovation is
the Graph-Energy Reward Function, which combines node-level energy scores
computed by a graph neural network with multi-hop topological dependencies
captured by diffusion. This joint formulation lets the RL agent balance
exploiting known patterns against discovering novel ones, while the policy
network, built on Transformer-XL, processes variable-length API sequences with
structural context. In addition, a graph-based Markov Decision Process creates
realistic API usage scenarios, with transitions modeled by a Graph Variational
Autoencoder that predicts likely subgraph evolutions. Experiments show that
GERL outperforms conventional methods in both pattern-mining accuracy and OOD
detection robustness, particularly for recursive or many-hop API usage.
\end{abstract}
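The abstract describes a reward that combines node-level energy scores with multi-hop structure propagated by diffusion. A minimal sketch of that idea is below; it is purely illustrative, not the paper's actual formulation. The function names (`diffuse`, `graph_energy_reward`), the row-normalized diffusion, the mixing weight `alpha`, and the penalty weight `lam` are all assumptions introduced here for the example, and the energies stand in for GNN outputs the paper would compute.

```python
import numpy as np

def diffuse(adjacency, signal, steps=3, alpha=0.5):
    """Propagate a node signal over the graph for multi-hop context.

    Simple row-normalized diffusion (an assumption for this sketch):
    each step mixes a node's value with its neighbors' average.
    """
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0          # avoid division by zero on isolated nodes
    transition = adjacency / deg  # row-stochastic transition matrix
    out = signal.copy()
    for _ in range(steps):
        out = (1 - alpha) * signal + alpha * (transition @ out)
    return out

def graph_energy_reward(adjacency, node_energy, visited, lam=0.3):
    """Toy reward: visiting low-energy (in-distribution) regions is
    rewarded; diffused high energy marks likely-OOD regions and is
    penalized. `node_energy` stands in for per-node GNN energy scores."""
    smoothed = diffuse(adjacency, node_energy)
    # Negative diffused energy as reward: lower energy -> higher reward.
    return float(-(lam * smoothed[visited]).sum())

# Tiny 4-node chain "API graph" with one high-energy (anomalous) node.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
energy = np.array([[0.1], [0.2], [0.1], [5.0]])  # node 3 looks OOD

print(graph_energy_reward(A, energy, visited=[0, 1]))  # in-distribution walk
print(graph_energy_reward(A, energy, visited=[2, 3]))  # walk into OOD region
```

Because diffusion spreads node 3's high energy to its neighbors, a trajectory entering that region is penalized even on nodes that are individually low-energy, which is the multi-hop structural awareness the abstract attributes to the Graph-Energy Reward Function.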
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 25404