Use Perturbations when Learning from Explanations

Published: 27 Oct 2023, Last Modified: 27 Oct 2023, NeurIPS XAIA 2023
TL;DR: We recast machine learning from explanations (MLX) as a robustness problem and show, both theoretically and empirically, why this approach is better suited to learning from explanations.
Abstract: Machine learning from explanations (MLX) is an approach to learning that uses human-provided explanations of relevant or irrelevant features for each input to ensure that model predictions are right for the right reasons. Existing MLX approaches rely on local model interpretation methods and require strong model smoothing to align model and human explanations, leading to sub-optimal performance. We recast MLX as a robustness problem, where human explanations specify a lower-dimensional manifold from which perturbations can be drawn, and show both theoretically and empirically how this approach alleviates the need for strong model smoothing. We consider various approaches to achieving robustness, leading to improved performance over prior MLX methods. Finally, we show how to combine robustness with an earlier MLX method, yielding state-of-the-art results on both synthetic and real-world benchmarks.
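The core idea in the abstract can be illustrated with a minimal sketch: draw perturbations restricted to the features a human marked irrelevant, and penalize any change in the model's output under those perturbations. The helper `perturbation_loss` and its arguments are hypothetical names for illustration, not the paper's actual implementation.

```python
import numpy as np

def perturbation_loss(model_fn, x, irrelevant_mask, sigma=0.1, n_samples=8, rng=None):
    """Sketch of a robustness-style MLX penalty: perturb only the features
    a human marked irrelevant (mask == 1) and penalize prediction change.

    This is an illustrative sketch, not the paper's exact objective.
    """
    rng = np.random.default_rng(rng)
    base = model_fn(x)  # prediction on the clean input
    loss = 0.0
    for _ in range(n_samples):
        # Noise lives on the lower-dimensional manifold of irrelevant features.
        noise = rng.normal(0.0, sigma, size=x.shape) * irrelevant_mask
        loss += float(np.mean((model_fn(x + noise) - base) ** 2))
    return loss / n_samples
```

A model that truly ignores the irrelevant features incurs zero penalty; a model that relies on them is penalized, which is the sense in which the MLX constraint becomes a robustness requirement rather than a gradient-alignment one.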
Submission Track: Full Paper Track
Application Domain: Computer Vision
Survey Question 1: We developed a method to improve machine learning models by using human-provided explanations of which features of the data are relevant or irrelevant. Traditional methods required strong model smoothing to align model and human explanations, which could hinder a model's performance; we reimagined this process as a robustness challenge, reducing the need for such smoothing. Explainability plays a pivotal role, guiding the learning process with these human explanations and leading to more accurate and trustworthy model predictions.
Survey Question 2: Our motivation for incorporating explainability was to ensure that machine learning predictions are accurate for the right reasons by leveraging human insights. Methods that lack explainability often face challenges in aligning model predictions with human understanding, which can lead to sub-optimal performance and reduced trust in the model's decisions.
Survey Question 3: We use Grad (the simple input gradient) in our method and apply SmoothGrad to visualize the importance scores.
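SmoothGrad, used above for visualization, averages input gradients over noisy copies of the input to reduce noise in the resulting saliency map. A minimal sketch, assuming the caller supplies a gradient function `grad_fn` (the function name and signature here are illustrative, not the paper's code):

```python
import numpy as np

def smoothgrad(grad_fn, x, sigma=0.15, n_samples=50, rng=None):
    """SmoothGrad: average the input gradient over Gaussian-perturbed
    copies of x. grad_fn(x) must return d(model output)/dx at x."""
    rng = np.random.default_rng(rng)
    grads = [grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)
```

For a linear or mildly nonlinear model the averaged map stays close to the plain gradient, since the zero-mean noise cancels; the benefit appears for sharply varying models, where single-point gradients are visually noisy.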
Submission Number: 27