REPAIR: Robust Editing via Progressive Adaptive Intervention and Reintegration
By: Yisu Wang, Ming Wang, Haoyuan Song, and more
Potential Business Impact:
Fixes AI mistakes without breaking other knowledge.
Post-training for large language models (LLMs) is constrained by the high cost of acquiring new knowledge or correcting errors and by the unintended side effects that frequently arise from retraining. To address these issues, we introduce REPAIR (Robust Editing via Progressive Adaptive Intervention and Reintegration), a lifelong editing framework designed to support precise and low-cost model updates while preserving non-target knowledge. REPAIR mitigates the instability and conflicts of large-scale sequential edits through a closed-loop feedback mechanism coupled with dynamic memory management. Furthermore, by incorporating frequent knowledge fusion and enforcing strong locality guards, REPAIR effectively addresses the shortcomings of traditional distribution-agnostic approaches that often overlook unintended ripple effects. Our experiments demonstrate that REPAIR boosts editing accuracy by 10%-30% across multiple model families and significantly reduces knowledge forgetting. This work introduces a robust framework for developing reliable, scalable, and continually evolving LLMs.
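The abstract's closed-loop cycle — stage an edit, verify it, guard non-target knowledge, and periodically fuse accumulated edits back into the model — can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the authors' implementation: the "model" is a plain dict, edits live in a side memory, and `FUSE_THRESHOLD` is a made-up knob for when fusion happens.

```python
# Toy sketch of a closed-loop lifelong-editing cycle in the spirit of
# REPAIR. The dict-based "model", the side memory, and FUSE_THRESHOLD
# are all illustrative assumptions, not the paper's actual method.

FUSE_THRESHOLD = 3  # fuse pending edits into the base model after this many

def answer(base, memory, query):
    """Answer a query, preferring recent edits held in the side memory."""
    return memory.get(query, base.get(query))

def apply_edit(base, memory, query, target, locality_probes):
    """Stage an edit, verify it took effect, and guard unrelated facts."""
    # Snapshot answers to unrelated probes before editing (locality guard).
    snapshot = {q: answer(base, memory, q) for q in locality_probes}
    memory[query] = target                      # stage the edit
    if answer(base, memory, query) != target:   # feedback check: did it land?
        del memory[query]
        return False
    for q in locality_probes:                   # locality guard: ripple check
        if answer(base, memory, q) != snapshot[q]:
            del memory[query]                   # roll back on side effects
            return False
    if len(memory) >= FUSE_THRESHOLD:           # periodic knowledge fusion
        base.update(memory)
        memory.clear()
    return True

# Usage: correct one wrong fact while leaving an unrelated fact intact.
base = {"telephone_inventor": "Edison", "capital_of_italy": "Rome"}
memory = {}
ok = apply_edit(base, memory, "telephone_inventor", "Bell",
                locality_probes=["capital_of_italy"])
```

In a real system the feedback check and locality guard would be probe-set evaluations of the edited LLM, and fusion would merge edit parameters into the base weights; the rollback-on-failure loop is the part this sketch is meant to convey.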
Similar Papers
Understanding Robustness of Model Editing in Code LLMs: An Empirical Study
Software Engineering
Tests how reliably AI coding models can be edited.
Automated Program Repair of Uncompilable Student Code
Software Engineering
Fixes broken student code for better learning.
RelRepair: Enhancing Automated Program Repair by Retrieving Relevant Code
Software Engineering
Helps computers fix software bugs using project details.