Keywords: Reinforcement Learning, Action Selection, Cost Optimization, Shapley
Abstract: Selecting a training action space for reinforcement learning (RL) is conflict-prone due to complex state-action relationships. To address this challenge, this paper proposes a Shapley-inspired methodology for training action space categorization and ranking. To avoid the exponential cost of exact Shapley value computation, the methodology uses Monte Carlo simulation to skip unnecessary explorations. The effectiveness of the methodology is illustrated using a cloud infrastructure resource tuning case study. It reduces the search space by 80% and categorizes the training actions into dispensable and indispensable groups. Additionally, it ranks the training actions to facilitate superior RL model performance at lower cost. The proposed data-driven methodology is extensible to different domains, use cases, and machine learning algorithms.
One-sentence Summary: A data-driven framework to optimize training action selection for reinforcement learning
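The paper itself is not shown here, but the abstract's core idea (Monte Carlo approximation of Shapley values to rank training actions by their marginal contribution) can be sketched as follows. The action names and the `value_fn` callable, which would score an RL model trained with a given action subset, are hypothetical placeholders, not the authors' actual implementation:

```python
import random

def monte_carlo_shapley(actions, value_fn, num_samples=1000, seed=0):
    """Estimate each action's Shapley value by sampling random permutations.

    value_fn(subset) is assumed to return the performance of an RL model
    trained with that subset of actions; exact Shapley values would require
    evaluating all 2^n subsets, so we average marginal contributions over
    sampled orderings instead.
    """
    rng = random.Random(seed)
    shapley = {a: 0.0 for a in actions}
    for _ in range(num_samples):
        perm = list(actions)
        rng.shuffle(perm)              # one random ordering of the actions
        coalition = []
        prev_value = value_fn(frozenset(coalition))
        for a in perm:
            coalition.append(a)
            new_value = value_fn(frozenset(coalition))
            shapley[a] += new_value - prev_value   # marginal contribution of a
            prev_value = new_value
    return {a: v / num_samples for a, v in shapley.items()}

# Toy usage: an additive value function over hypothetical tuning actions.
contributions = {"cpu": 3.0, "mem": 2.0, "io": 0.0}
scores = monte_carlo_shapley(
    list(contributions),
    lambda s: sum(contributions[a] for a in s),
    num_samples=200,
)
ranked = sorted(scores, key=scores.get, reverse=True)
```

Actions with near-zero estimated Shapley value (like `io` above) would fall into the dispensable group, while high-value actions are indispensable and ranked first.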