Abstract: Machine learning models are increasingly deployed to take, or assist in taking, complicated and high-impact decisions, from quasi-autonomous vehicles to clinical decision support systems. This poses challenges, particularly when models have hard-to-detect failure modes and are able to act without oversight. To address this challenge, we propose a collaborative system that remains safe by leaving the final decision with a human, while giving the model the best opportunity to convince them, and debate with them, through interpretable explanations. However, the most helpful explanation varies between individuals and may be inconsistent with their stated preferences. We therefore develop an algorithm, Ardent, that efficiently learns a ranking over explanations through interaction in order to best assist humans in completing a task. By taking a collaborative approach, we can ensure safety and improve performance while addressing concerns around transparency and accountability. Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations, which we validate through extensive simulations alongside a user study involving a challenging image classification task, demonstrating consistent improvement over competing systems.
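To make the core idea of learning which explanation best assists a given user through interaction more concrete, the following is a minimal illustrative sketch only, not the Ardent algorithm itself (whose details are not given in this abstract). It assumes feedback reduces to a binary "the explanation helped" signal and uses a simple Thompson-sampling bandit over candidate explanation methods; the class and method names are hypothetical.

```python
# Illustrative sketch (assumption): adaptively choosing among candidate
# explanation methods from per-user interaction feedback, via a simple
# Beta-Bernoulli Thompson-sampling bandit. This is NOT the Ardent algorithm.
import numpy as np

class ExplanationSelector:
    def __init__(self, explainer_names, seed=0):
        self.names = list(explainer_names)
        self.rng = np.random.default_rng(seed)
        # Beta(1, 1) prior over "this explanation helped the user".
        self.successes = np.ones(len(self.names))
        self.failures = np.ones(len(self.names))

    def select(self):
        # Thompson sampling: sample from each posterior, show the best explainer.
        samples = self.rng.beta(self.successes, self.failures)
        return int(np.argmax(samples))

    def update(self, idx, helped):
        # `helped` could be whether the user's final decision was correct,
        # or explicit feedback on the presented explanation (an assumption).
        if helped:
            self.successes[idx] += 1
        else:
            self.failures[idx] += 1

selector = ExplanationSelector(["integrated_gradients", "deeplift", "occlusion"])
choice = selector.select()
# ... show explanation `choice` to the user, observe the outcome ...
selector.update(choice, helped=True)
```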
Submission Track: Full Paper Track
Application Domain: None of the above / Not applicable
Clarify Domain: We present a domain-agnostic method for safe decision support that incorporates explainability methods.
Survey Question 1: Machine learning models are increasingly used to aid complex decision-making, but given the potential risks they should not act without human oversight. To ensure safety, we designed a system in which humans make the final decision while the system provides the model's predictions alongside explanations to inform them. Our algorithm, Ardent, learns which explanations to present based on individual preferences, enabling better collaboration and decision-making.
Survey Question 2: Incorporating explainability was crucial because without it, users might not trust or understand the decisions suggested by machine learning models. Methods lacking explainability can lead to blind reliance on algorithms, undetected errors, and reduced accountability in high-stakes decisions.
Survey Question 3: Our method is agnostic to the individual explainability technique, although in some of our demonstrations we use: Integrated Gradients, DeepLIFT, SimplEx, Nearest-Neighbour, and Occlusion Maps.
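As a pointer to how several of these attribution methods can be computed in practice, here is a short hedged sketch using the Captum library; it assumes a PyTorch image classifier `model` and an input batch `inputs` with integer class labels `targets` (all names are placeholders), and the window/stride sizes are arbitrary. SimplEx and nearest-neighbour examples rely on separate tooling and are omitted.

```python
# Sketch (assumptions noted above): computing three of the named attribution
# methods with Captum for a PyTorch image classifier.
import torch
from captum.attr import IntegratedGradients, DeepLift, Occlusion

def attributions(model, inputs, targets):
    model.eval()
    ig = IntegratedGradients(model).attribute(inputs, target=targets)
    dl = DeepLift(model).attribute(inputs, target=targets)
    occ = Occlusion(model).attribute(
        inputs,
        target=targets,
        sliding_window_shapes=(3, 8, 8),  # occlude 8x8 patches over all channels
        strides=(3, 4, 4),
    )
    return {"integrated_gradients": ig, "deeplift": dl, "occlusion": occ}
```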
Submission Number: 25