Application of eXplainable AI and causal inference methods to estimation algorithms in networks of dynamic systems
Abstract: While continued progress in machine learning is producing algorithms with ever better decision and predictive performance, the way such algorithms operate is becoming increasingly inscrutable. As a growing number of decisions is ceded to inexplicable algorithms that elude human supervision or scrutiny, it is natural to raise doubts about their fairness, soundness, and reliability. This has motivated a growing need for tools capable of disentangling and explaining the mechanisms behind AI-based decisions, giving rise to a new field of research referred to as eXplainable AI (XAI). Given the significant impact that machine learning is also having on the area of estimation and control, this article advances the idea of borrowing methodologies from XAI and adapting them to estimation and control algorithms involving networks of dynamic processes. Specifically, we adapt the methodology known as Local Interpretable Model-Agnostic Explanations (LIME) to explain the mechanisms behind a black-box estimation algorithm processing time series. Furthermore, we find that LIME can be extended with notions of causal inference to detect cause-effect relations among the features that the estimation algorithm takes as inputs. This causal inference procedure endows LIME with additional explanatory power.
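To fix ideas, the following is a minimal, self-contained sketch of the LIME idea applied to a black-box estimator of time-series features: perturb the instance locally, query the black box, weight the perturbations by proximity, and fit an interpretable linear surrogate. All names here (the `black_box` stand-in, the lagged-feature instance, the kernel width) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an inscrutable estimator acting on lagged samples
    # of a process (placeholder for, e.g., a trained neural network).
    return np.tanh(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

# Instance to explain: a vector of lagged time-series features x(t-1), ..., x(t-3).
x0 = rng.normal(size=3)

# 1) Generate perturbations in a neighborhood of the instance.
Z = x0 + 0.3 * rng.normal(size=(500, 3))
y = black_box(Z)

# 2) Weight each perturbation by its proximity to x0 (Gaussian kernel).
d2 = np.sum((Z - x0) ** 2, axis=1)
w = np.exp(-d2 / 0.5)

# 3) Fit an interpretable (linear) surrogate on the weighted samples;
#    its coefficients serve as local feature attributions.
surrogate = Ridge(alpha=1e-3).fit(Z, y, sample_weight=w)
print("local feature attributions:", surrogate.coef_)
```

The surrogate's coefficients quantify how much each lagged feature locally drives the black box's output; the causal-inference extension described in the abstract would go further and distinguish which of these features stand in cause-effect relations to one another.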