Fed-Energy: Federated Reinforcement Learning for Scalable and Energy-Efficient Large-Scale Code Optimization
Keywords: Large-Scale Code Optimization
Abstract: \begin{abstract}
We propose \textbf{Fed-Energy}, a federated reinforcement learning (RL) framework for scalable and energy-efficient large-scale code optimization. Modern code optimization faces two conflicting pressures: the computational burden of training RL models and the lack of accurate energy-consumption estimates across a wide variety of codebases. Fed-Energy addresses both by combining lightweight energy models with federated learning, enabling distributed training and adaptive aggregation of local energy predictors. Each participating code component trains a compact neural network, such as an LSTM or CNN, to estimate a program's energy consumption from its execution traces and structural features, and these local estimates are aggregated through a personalized federated scheme that accounts for non-IID data distributions. The RL agent learns sequences of program code transformations using a composite reward that trades off energy, performance, and computational overhead, with compiler pipelines and dynamic profilers providing feedback for refinement. Fed-Energy's decentralized design avoids monolithic simulators, easing the computational workload while preserving privacy and scalability. Moreover, its spatio-temporal adaptive coordination distinguishes it from static federated averaging and enables context-aware optimization across heterogeneous code structures. Experiments show improvements in energy efficiency and training scalability over centralized methods, making the framework a practical candidate for real-world deployment. The framework's novelty lies in its joint treatment of federated learning and RL, providing a scalable and accurate alternative to traditional energy-aware code optimization.
\end{abstract}
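The abstract does not spell out the composite reward or the personalized aggregation; the LaTeX sketch below shows one plausible reading of both. The weights $\alpha, \beta, \gamma$, the per-step energy, runtime, and overhead terms $\Delta E_t, \Delta T_t, C_t$, and the mixing coefficients $\lambda_k, p_{kj}$ are illustrative assumptions, not notation from the submission.

% Hypothetical sketch only: all symbols below are assumptions, not the paper's definitions.
% Composite reward for the t-th code transformation: penalize predicted energy
% increase, runtime regression, and the cost of applying and profiling the transform.
\[
  r_t \;=\; -\,\alpha\,\Delta E_t \;-\; \beta\,\Delta T_t \;-\; \gamma\,C_t .
\]
% Personalized federated update of client k's energy predictor: mix the local
% model \theta_k with peer models \theta_j, weighted by data-similarity
% coefficients p_{kj} to account for non-IID code distributions.
\[
  \theta_k^{\mathrm{new}} \;=\; \lambda_k\,\theta_k \;+\; (1-\lambda_k)\sum_{j \neq k} p_{kj}\,\theta_j ,
  \qquad \sum_{j \neq k} p_{kj} = 1 .
\]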
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 25428