Analysis of the User Experience with a Multiperspective Tool for Explainable Machine Learning in Light of Interactive Principles
Abstract: Machine Learning (ML) models are now widely used, often as "magical black boxes", across many domains and for distinct goals, yet the way they generate their results is still not fully understood, even by knowledgeable users. If users cannot interpret or trust a model's predictions, they will not use it. Furthermore, the human role is often not (properly) considered in the development of ML systems. In this article, we present a discussion of the user-centered development of Interactive Machine Learning systems. We ground our discussion on Explain-ML, a multi-perspective, human-centered machine learning tool that assists users in building, refining, and interpreting ML models. To do so, we analyzed the results of the evaluation of Explain-ML with potential users in light of the design principles for Interactive ML systems. Our results contribute to the understanding and consolidation of these principles. Moreover, the experience gained from the discussion of Explain-ML, its user-centered development, and its evaluation with users, based on these principles, is relevant to the research and development of Interactive ML tools that involve ML explainability.