Abstract: Despite the superior performance of deep learning-based controllers in network applications, their practical adoption is limited by the difficulty of understanding and trusting them. Existing explainability solutions largely focus on interpreting these controllers by providing insights into the top features used by the model. Although these insights can reveal an important aspect of the controller, they force operators to deal with low-level features, demanding extensive manual analysis and interpretation. In this work, we present a novel explainability approach that provides insights to operators using high-level, human-understandable concepts (e.g., 'fluctuating network throughput'). Our approach offers an intuitive platform for operators to identify unintended behaviors, develop strategies to address them, and define the data collection needed to implement those strategies. Our concept-based explainability framework lays the foundation for an intelligent AI system in which operators can design the controller they intend using familiar terminology and domain knowledge. We provide an initial implementation of our ideas in adaptive video streaming and demonstrate its potential.