RippleBench: Capturing Ripple Effects Using Existing Knowledge Repositories
By: Roy Rinberg, Usha Bhalla, Igor Shilov, and more
Potential Business Impact:
Tests how AI forgets unwanted information without messing up other knowledge.
Targeted interventions on language models, such as unlearning, debiasing, or model editing, are a central method for refining model behavior and keeping knowledge up to date. While these interventions aim to modify specific information within models (e.g., removing virology content), their effects often propagate to related but unintended areas (e.g., allergies); these side effects are commonly referred to as the ripple effect. In this work, we present RippleBench-Maker, an automatic tool for generating Q&A datasets that measure ripple effects in any model-editing task. RippleBench-Maker builds on a Wikipedia-based RAG pipeline (WikiRAG) to generate multiple-choice questions at varying semantic distances from the target concept (e.g., the knowledge being unlearned). Using this framework, we construct RippleBench-Bio, a benchmark derived from the WMDP (Weapons of Mass Destruction Proxy) dataset, a common unlearning benchmark. We evaluate eight state-of-the-art unlearning methods and find that all exhibit non-trivial accuracy drops on topics increasingly distant from the unlearned knowledge, each with a distinct propagation profile. To support ongoing research, we release our codebase for on-the-fly ripple evaluation, along with the benchmark, RippleBench-Bio.
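The recipe the abstract describes can be sketched in a few steps: embed the target concept and candidate Wikipedia passages, tag each generated multiple-choice question with its semantic distance from the target, and report accuracy per distance bin. Below is a minimal, hypothetical sketch of that idea, assuming a sentence-transformers encoder for distances; the helper names (`make_mcq`, `ask_model`) and the binning scheme are illustrative assumptions, not the released RippleBench-Maker API.

```python
# Illustrative sketch of a ripple-effect evaluation, not the paper's actual code.
from dataclasses import dataclass

import numpy as np
from sentence_transformers import SentenceTransformer

_encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


@dataclass
class MCQ:
    question: str
    choices: list[str]
    answer: int      # index of the correct choice
    distance: float  # semantic distance from the unlearned concept


def semantic_distance(target: str, passage: str) -> float:
    """Cosine distance between the target concept and a candidate passage."""
    t, p = _encoder.encode([target, passage], normalize_embeddings=True)
    return float(1.0 - np.dot(t, p))


def build_ripple_set(target: str, passages: list[str], make_mcq) -> list[MCQ]:
    """Turn retrieved passages into MCQs tagged with their distance to the target.

    `make_mcq` is a hypothetical question writer (e.g., LLM-backed) that returns
    (question, choices, answer_index) for a passage.
    """
    questions = []
    for passage in passages:
        q, choices, answer = make_mcq(passage)
        questions.append(MCQ(q, choices, answer, semantic_distance(target, passage)))
    return questions


def ripple_profile(mcqs: list[MCQ], ask_model, n_bins: int = 5) -> dict[int, float]:
    """Accuracy per distance bin; a flat profile indicates a small ripple effect."""
    dists = np.array([q.distance for q in mcqs])
    edges = np.linspace(dists.min(), dists.max(), n_bins + 1)
    bins = np.clip(np.digitize(dists, edges) - 1, 0, n_bins - 1)
    accuracy = {}
    for b in range(n_bins):
        bucket = [q for q, qb in zip(mcqs, bins) if qb == b]
        if bucket:
            correct = sum(ask_model(q.question, q.choices) == q.answer for q in bucket)
            accuracy[b] = correct / len(bucket)
    return accuracy
```

Comparing `ripple_profile` for an edited model against its unedited counterpart then shows how far the accuracy drop propagates beyond the targeted concept, which is the quantity the benchmark is built to expose.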
Similar Papers
RapidUn: Influence-Driven Parameter Reweighting for Efficient Large Language Model Unlearning
Computation and Language
Teaches AI to forget bad information quickly.
Towards a Real-World Aligned Benchmark for Unlearning in Recommender Systems
Information Retrieval
Removes your data from recommendation systems quickly.