Evaluating Machine Learning Models with NERO: Non-Equivariance Revealed on Orbits

Published: 10 Oct 2024 · Last Modified: 03 Dec 2024 · IAI Workshop @ NeurIPS 2024 · CC BY 4.0
Keywords: Interactive Visualization Systems and Tools, Explainable Machine Learning, Equivariance, Integrated Workflows
TL;DR: We introduce a novel ML evaluation method, "Non-Equivariance Revealed on Orbits" (NERO), which uses interactive interfaces and visualizations to offer in-depth analysis and robustness assessment, validated across multiple applications.
Abstract: Traditional scalar-based error metrics, while quick for assessing machine learning (ML) model performance, often fail to expose weaknesses or offer fair evaluations, particularly with limited test data. To address this growing issue, we introduce "Non-Equivariance Revealed on Orbits" (NERO), a novel evaluation procedure that enhances model analysis through assessing equivariance and robustness. NERO combines a task-agnostic interactive interface with a suite of visualizations to deeply analyze and improve model interpretability. We validate the effectiveness of NERO across various applications, including 2D digit recognition, object detection, particle image velocimetry (PIV), and 3D point cloud classification. Our case studies demonstrate the ability of NERO to clearly depict model equivariance and provide detailed insights into model outputs. Additionally, we introduce "consensus" as an alternative to traditional ground truths, expanding NERO to unlabeled datasets and enabling broader applications in diverse ML contexts.
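The core idea in the abstract, evaluating a model over the orbit of an input under a group of transformations and inspecting the resulting per-transform errors, can be sketched as a minimal toy in Python. Here `orbit_errors`, the toy model, and the rotation group are illustrative assumptions for exposition, not the paper's implementation:

```python
import numpy as np

def orbit_errors(model, x, transforms):
    """Evaluate a model on every element of an input's orbit,
    returning one scalar error per transformation.
    (Hypothetical helper illustrating the orbit-evaluation idea.)"""
    return [model(t(x)) for t in transforms]

# Toy "model": scores how far a 2D point lies from the x-axis.
# A rotation-equivariant model would yield a flat error curve.
model = lambda p: abs(p[1])

def rotation(theta):
    """Return a function rotating a 2D point by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return lambda p: np.array([c * p[0] - s * p[1],
                               s * p[0] + c * p[1]])

# Orbit of the point (1, 0) under 8 evenly spaced rotations.
angles = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
errors = orbit_errors(model, np.array([1.0, 0.0]),
                      [rotation(a) for a in angles])
# A non-flat `errors` curve reveals non-equivariance under rotation.
```

A NERO-style plot would visualize `errors` against the transformation parameter (here, the rotation angle), so deviations from a flat profile expose where the model breaks equivariance.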
Track: Main track
Submitted Paper: No
Published Paper: No
Submission Number: 35