TL;DR: The paper introduces Explanatory Performance Estimation (XPE), a novel framework that attributes an anticipated performance decline of an ML model under distribution shift to interpretable input features, enabling actionable insights.
Abstract: Monitoring machine learning systems and efficiently recovering their reliability after performance degradation are two of the most critical issues in real-world applications. However, current monitoring strategies lack the capability to provide actionable insights answering why a particular model's performance actually degraded. To address this, we propose Explanatory Performance Estimation (XPE) as a novel method that facilitates more informed model monitoring and maintenance by attributing an estimated performance change to interpretable input features. We demonstrate the superiority of our approach over natural baselines on different data sets. We also discuss how the generated results lead to valuable insights that can reveal potential root causes of model deterioration and guide toward actionable countermeasures.
Submission Track: Full Paper Track
Application Domain: Computer Vision
Survey Question 1: We propose Explanatory Performance Estimation (XPE) as a novel method that facilitates more informed model monitoring and maintenance by attributing an estimated performance change to interpretable input features.
Survey Question 2: We aim to include explainability methods in ML systems to enable the user to make more informed decisions, for instance to enhance the monitoring and maintenance of ML systems. Furthermore, explainability plays a crucial role in increasing the trustworthiness of ML systems, especially in high-stakes applications.
Survey Question 3: In this work, we utilize Shapley values due to their beneficial theoretical properties. In general, the choice of attribution method depends strongly on the application and the data modality, and therefore requires a case-specific selection.
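As an illustrative sketch only (not the paper's implementation), the idea of attributing an estimated performance change to input features with Shapley values can be shown with an exact, brute-force computation on a toy problem. Here the "game" assigns each feature coalition the estimated loss of an input where those features take their shifted values and the rest stay at reference values; the toy `score` function and the feature values are hypothetical stand-ins for a real performance estimator.

```python
from itertools import combinations
from math import factorial

def shapley_attribution(score, x_ref, x_shift):
    """Exact Shapley values attributing score(x_shift) - score(x_ref) to
    individual features. Coalition value v(S): the score of an input where
    features in S take shifted values and all others keep reference values.
    Exponential in the number of features; for illustration only."""
    n = len(x_ref)

    def v(S):
        z = [x_shift[i] if i in S else x_ref[i] for i in range(n)]
        return score(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# Hypothetical "estimated loss": rises sharply when feature 0 drifts from 1.0.
score = lambda z: (z[0] - 1.0) ** 2 + 0.1 * abs(z[1])
x_ref = [1.0, 0.0, 5.0]    # reference (in-distribution) feature values
x_shift = [3.0, 0.5, 5.0]  # shifted feature values; feature 2 is unchanged
phi = shapley_attribution(score, x_ref, x_shift)

# Efficiency: attributions sum to the total estimated performance change.
assert abs(sum(phi) - (score(x_shift) - score(x_ref))) < 1e-9
```

In this additive toy case, the drift in feature 0 receives almost the entire attributed loss increase, while the unchanged feature 2 receives zero, which is exactly the kind of actionable signal XPE aims for.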
Submission Number: 31