Interactivity × Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations
Abstract: Explanations for computer vision models are important tools for
interpreting how the underlying models work. However, they are
often presented in static formats, which pose challenges for users,
including information overload, a gap between semantic and pixel-level information, and limited opportunities for exploration. We
investigate interactivity as a mechanism for tackling these issues in
three common explanation types: heatmap-based, concept-based,
and prototype-based explanations. We conducted a study (N=24)
using a bird identification task, with participants of diverse
technical and domain expertise. We found that while interactivity
enhances user control, facilitates rapid convergence to relevant
information, and allows users to expand their understanding of
the model and explanation, it also introduces new challenges. To
address these, we provide design recommendations for interactive
computer vision explanations, including carefully selected default
views, independent input controls, and constrained output spaces.