Abstract: Feature attribution techniques are crucial for interpreting machine learning models, but practitioners often have difficulty understanding how different methods compare and which one best fits their analytical goals. This difficulty arises from inconsistent results across methods, from evaluation metrics that emphasize distinct and sometimes conflicting properties, and from subjective preferences that shape how explanation quality is judged. In this paper, we introduce Explainalytics, an open-source Python library that turns this challenging decision-making process into an evidence-based visual analytics workflow. Explainalytics computes a range of evaluation metrics and presents the results through five coordinated views spanning global to local analysis. Linked filtering, dynamic updates, and brushing let users pivot fluidly between global trends and local details, supporting exploratory sense-making rather than rigid pipelines. In a within-subject laboratory study with 10 machine learning practitioners, we compared Explainalytics against a baseline; participants reported significantly lower cognitive workload and higher perceived usability with Explainalytics.
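As a concrete illustration of the kind of evaluation such a workflow automates, the sketch below quantifies how strongly two attribution methods agree on the most important features of an instance. It uses only NumPy; the function name and the example vectors are hypothetical illustrations and do not represent Explainalytics's API, which the abstract does not specify.

```python
import numpy as np

def rank_agreement(attr_a: np.ndarray, attr_b: np.ndarray, k: int = 5) -> float:
    """Fraction of overlap between the top-k features (by absolute
    attribution) of two attribution vectors -- one simple way to measure
    how much two explanation methods agree on feature importance."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

# Hypothetical attributions for one instance from two different methods.
method_a = np.array([0.9, -0.1, 0.4, 0.05, -0.6, 0.2])
method_b = np.array([0.7,  0.3, 0.5, 0.00, -0.4, 0.1])
print(rank_agreement(method_a, method_b, k=3))  # 1.0: top-3 sets coincide
```

Computing such pairwise metrics across many methods, instances, and criteria is exactly the tabulation step that a tool like Explainalytics would then surface through its coordinated views.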