Score: 2

RippleBench: Capturing Ripple Effects Using Existing Knowledge Repositories

Published: December 3, 2025 | arXiv ID: 2512.04144v1

By: Roy Rinberg, Usha Bhalla, Igor Shilov, and more

Potential Business Impact:

Tests whether AI models can forget targeted information without degrading related knowledge.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Targeted interventions on language models, such as unlearning, debiasing, or model editing, are a central method for refining model behavior and keeping knowledge up to date. While these interventions aim to modify specific information within models (e.g., removing virology content), their effects often propagate to related but unintended areas (e.g., allergies); these side effects are commonly referred to as the ripple effect. In this work, we present RippleBench-Maker, an automatic tool for generating Q&A datasets that enable measurement of ripple effects in any model-editing task. RippleBench-Maker builds on a Wikipedia-based RAG pipeline (WikiRAG) to generate multiple-choice questions at varying semantic distances from the target concept (e.g., the knowledge being unlearned). Using this framework, we construct RippleBench-Bio, a benchmark derived from the WMDP (Weapons of Mass Destruction Proxy) dataset, a common unlearning benchmark. We evaluate eight state-of-the-art unlearning methods and find that all exhibit non-trivial accuracy drops on topics increasingly distant from the unlearned knowledge, each with distinct propagation profiles. To support ongoing research, we release our codebase for on-the-fly ripple evaluation, along with the benchmark, RippleBench-Bio.
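To make the evaluation idea concrete, here is a minimal sketch of how a ripple-effect profile could be computed: multiple-choice questions are bucketed by semantic distance from the unlearned concept, and per-bucket accuracy is compared before and after the edit. This is an illustrative assumption of the setup, not the paper's actual codebase; the function names, the question dict layout, and the `model(prompt, choices)` callable are all hypothetical.

```python
# Hypothetical sketch of a ripple-effect evaluation loop, assuming
# questions are tagged with a semantic distance (e.g., Wikipedia link
# hops) from the unlearned concept. Not the paper's actual API.
from collections import defaultdict

def ripple_profile(model, questions):
    """Return accuracy per semantic-distance bucket for one model.

    `questions` is an iterable of dicts like:
      {"distance": 2, "prompt": "...", "choices": [...], "answer_idx": 0}
    `model(prompt, choices)` is assumed to return the chosen index.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for q in questions:
        pred = model(q["prompt"], q["choices"])
        total[q["distance"]] += 1
        correct[q["distance"]] += int(pred == q["answer_idx"])
    return {d: correct[d] / total[d] for d in sorted(total)}

def ripple_effect(base_model, edited_model, questions):
    """Accuracy drop per distance bucket after an edit/unlearning step.

    A drop concentrated at distance 0 means the edit stayed targeted;
    drops at larger distances indicate ripple effects.
    """
    base = ripple_profile(base_model, questions)
    edited = ripple_profile(edited_model, questions)
    return {d: base[d] - edited[d] for d in base}
```

Plotting the resulting drop against distance would yield the kind of "propagation profile" the abstract describes for each unlearning method.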

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Artificial Intelligence