RippleBench: Capturing Ripple Effects by Leveraging Existing Knowledge Repositories

Published: 15 Oct 2025 · Last Modified: 24 Nov 2025 · BioSafe GenAI 2025 Poster · CC BY 4.0
Keywords: Benchmarks, Unlearning, Bio-weapons
TL;DR: This paper introduces RippleBench, a benchmark and dataset for evaluating ripple effects in model editing and unlearning.
Abstract: Targeted interventions on language models, such as unlearning, debiasing, or model editing, are a central method for refining model behavior and keeping knowledge up to date. While these interventions aim to modify specific information within models (e.g., removing virology content), their effects often propagate to related but unintended areas (e.g., allergies); these side effects are commonly referred to as the "ripple effect". In this work, we present `RippleBench-Maker`, an automatic tool for generating Q\&A datasets that enable the measurement of ripple effects in any model-editing task. `RippleBench-Maker` builds on a Wikipedia-based RAG pipeline (WikiRAG) to generate multiple-choice questions at varying semantic distances from the target concept (e.g., the knowledge being unlearned). Using this framework, we construct `RippleBench-Bio`, a benchmark derived from the WMDP (Weapons of Mass Destruction Proxy) dataset, a common unlearning benchmark. We evaluate eight state-of-the-art unlearning methods and find that all exhibit non-trivial accuracy drops on topics increasingly distant from the unlearned knowledge, each with a distinct propagation profile. To support ongoing research, we release our codebase for on-the-fly ripple evaluation, along with the benchmark `RippleBench-Bio` ($12{,}895$ unique topics).
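
To make the evaluation idea concrete, the sketch below shows one way accuracy could be bucketed by semantic distance from the edited concept, which is the core measurement the abstract describes. It is a minimal illustration, not the released codebase: the item fields (`distance`, `choices`, `answer_idx`) and the `predict` callback are assumptions about a plausible question schema, not the actual `RippleBench` API.

```python
from collections import defaultdict

def accuracy_by_distance(questions, predict):
    """Compute per-bucket accuracy, where each bucket groups multiple-choice
    questions by their semantic distance from the unlearned/edited concept.

    `questions` is a list of dicts with hypothetical keys:
      - "distance": int, semantic distance from the target concept
      - "question": str, the question text
      - "choices": list[str], answer options
      - "answer_idx": int, index of the correct option
    `predict(question, choices)` returns the model's chosen option index.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for q in questions:
        d = q["distance"]
        total[d] += 1
        if predict(q["question"], q["choices"]) == q["answer_idx"]:
            correct[d] += 1
    return {d: correct[d] / total[d] for d in sorted(total)}

# Toy usage with a trivial "model" that always picks option 0:
if __name__ == "__main__":
    demo = [
        {"distance": 0, "question": "example near-target question",
         "choices": ["A", "B", "C", "D"], "answer_idx": 0},
        {"distance": 2, "question": "example distant question",
         "choices": ["A", "B", "C", "D"], "answer_idx": 3},
    ]
    print(accuracy_by_distance(demo, lambda question, choices: 0))
```

A curve of these per-distance accuracies (before vs. after an intervention) is one way to visualize the propagation profile of an unlearning method.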
Submission Number: 14