Deciphering Deep Reinforcement Learning: Towards Explainable Decision-Making in Optical Networks

Published: 01 Jan 2024, Last Modified: 24 Mar 2025. HPSR 2024. License: CC BY-SA 4.0
Abstract: In recent years, applying Deep Reinforcement Learning (DRL) techniques to optical networks has emerged as an innovative approach, imitating human cognitive processes and enabling machines to learn, reason, and make decisions autonomously. A significant challenge in deploying these techniques, however, lies in their inherent opacity: they often behave as ‘black box’ models, making it difficult to comprehend the rationale behind specific actions. Yet effective control and resource-management decisions in optical networks require that actions be as transparent as possible. This paper addresses this challenge by providing a framework for generating comprehensive explanations of the decision-making process of DRL agents. We employ imitation learning to train a random forest classifier, achieving interpretability by leveraging insights from a robust non-linear reinforcement learning agent tailored to elastic optical networks. Through this approach, we attain a level of explainability that allows us to decipher and understand the decisions made by the DRL agent, thus enhancing our ability to manage optical networks effectively.
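The core idea described in the abstract, distilling a black-box DRL policy into an interpretable random forest via imitation learning, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic `states`/`actions` arrays stand in for state-action pairs collected by rolling out the trained DRL agent on the elastic optical network, and the toy surrogate policy is invented for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for rollouts of the trained DRL agent: each row is an observed
# network state (e.g. per-link utilisations), each label the action the
# agent chose in that state. Here a toy rule plays the role of the policy.
states = rng.random((500, 8))
actions = (states[:, 0] > states[:, 1]).astype(int)

# Imitation learning step: fit an interpretable surrogate on the agent's
# state-action pairs. Shallow trees keep the model human-readable.
forest = RandomForestClassifier(n_estimators=100, max_depth=4, random_state=0)
forest.fit(states, actions)

# Fidelity measures how faithfully the surrogate mimics the agent, and
# feature importances indicate which state variables drive its decisions.
fidelity = forest.score(states, actions)
print(f"imitation fidelity: {fidelity:.2f}")
print("most influential state feature:", int(np.argmax(forest.feature_importances_)))
```

In practice the explanation step would inspect the fitted trees (or their feature importances) to articulate why the agent takes a given action, with fidelity on held-out rollouts quantifying how much the explanation can be trusted.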