RippleBench: Capturing Ripple Effects by Leveraging Existing Knowledge Repositories

Published: 30 Sept 2025, Last Modified: 30 Sept 2025 · Mech Interp Workshop (NeurIPS 2025) Spotlight · CC BY 4.0
Keywords: Interpretability tooling and software, Benchmarking interpretability, Other
Other Keywords: Benchmarks, Unlearning
TL;DR: This paper introduces RippleBench, a benchmark and dataset for evaluating the ripple effects in model editing and unlearning.
Abstract: The ability to make targeted updates to models, whether for unlearning, debiasing, model editing, or safety alignment, is central to AI safety. While these interventions aim to modify specific knowledge (e.g., removing virology content), their effects often propagate to related but unintended areas (e.g., allergies). Due to a lack of standardized tools, existing evaluations typically compare performance on targeted versus unrelated general tasks, overlooking the broader collateral impact known as the "ripple effect". We introduce **RippleBench**, a benchmark for systematically measuring how interventions affect semantically related knowledge. **RippleBench** is built on top of a Wikipedia-RAG pipeline that generates multiple-choice questions; using it, we evaluate eight state-of-the-art unlearning methods. We find that all methods exhibit non-trivial accuracy drops on topics increasingly distant from the unlearned knowledge, each with a distinct propagation profile. We release our codebase for on-the-fly ripple evaluation as well as RippleBench-WMDP-Bio, a dataset derived from WMDP biology, containing 9,888 unique topics and 49,247 questions.
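The abstract describes measuring accuracy as a function of semantic distance from the unlearned topic. The sketch below illustrates that idea only; it is not the authors' released code, and the question fields and the `predict` wrapper are hypothetical names introduced for illustration.

```python
# Minimal sketch (not the released RippleBench code): given multiple-choice questions
# tagged with a semantic distance from the unlearned topic, compute mean accuracy per
# distance bucket to trace a model's "ripple" profile.
from collections import defaultdict
from typing import Callable

def ripple_profile(
    questions: list[dict],  # each: {"prompt": str, "choices": list[str], "answer": int, "distance": int}
    predict: Callable[[str, list[str]], int],  # hypothetical model wrapper returning a choice index
) -> dict[int, float]:
    correct: dict[int, int] = defaultdict(int)
    total: dict[int, int] = defaultdict(int)
    for q in questions:
        d = q["distance"]
        total[d] += 1
        if predict(q["prompt"], q["choices"]) == q["answer"]:
            correct[d] += 1
    # Accuracy per distance bucket; a steep drop near distance 0 suggests
    # stronger collateral damage from the intervention.
    return {d: correct[d] / total[d] for d in sorted(total)}
```

Comparing the profile of an edited or unlearned model against its base model at each distance would then show how far the intervention's effects propagate.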
Submission Number: 104