Spectral Characterization and Mitigation of Sequential Knowledge Editing Collapse
By: Chi Zhang, Mengqi Zhang, Xiaotian Ye, and more
Potential Business Impact:
Keeps AI smart when learning new things.
Sequential knowledge editing in large language models often causes catastrophic collapse of the model's general abilities, especially for parameter-modifying methods. Existing approaches mitigate this issue through heuristic constraints on parameter updates, yet the mechanisms underlying such degradation remain insufficiently understood. In this work, we present a spectral analysis of sequential knowledge editing and show that a model's general abilities are closely associated with dominant singular directions of pretrained weight matrices. These directions are highly sensitive to perturbations and are progressively disrupted by repeated edits, closely tracking the collapse in both editing efficacy and general performance. Building on this insight, we propose REVIVE, a plug-and-play framework that stabilizes sequential editing by explicitly preserving the dominant singular subspace. REVIVE represents parameter updates in the spectral basis of the original weights and filters components that would interfere with the protected region. Extensive experiments across multiple models and benchmarks show that REVIVE consistently improves editing efficacy while substantially preserving general abilities under long-horizon sequential editing, including extreme settings with up to 20,000 edits.
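The core idea of preserving the dominant singular subspace can be illustrated with a minimal sketch: decompose the pretrained weight matrix with an SVD, then project each candidate update so it cannot perturb the top-k singular directions. The function name, the choice of projecting on both the left and right subspaces, and the cutoff `k` are illustrative assumptions here, not the paper's actual REVIVE implementation.

```python
import numpy as np

def filter_update(W, delta, k):
    """Remove from the update `delta` any component that would
    perturb the top-k singular subspace of W.
    Hypothetical sketch of spectral-subspace filtering; not the
    paper's actual REVIVE algorithm."""
    # Spectral basis of the original (pretrained) weights.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    Uk = U[:, :k]        # dominant left singular vectors (protected)
    Vk = Vt[:k, :].T     # dominant right singular vectors (protected)
    # Projectors onto the orthogonal complement of the protected subspaces.
    P_left = np.eye(W.shape[0]) - Uk @ Uk.T
    P_right = np.eye(W.shape[1]) - Vk @ Vk.T
    # The filtered update acts only outside the protected region.
    return P_left @ delta @ P_right
```

Applying `W + filter_update(W, delta, k)` instead of `W + delta` leaves the protected directions untouched, which is one way to realize the abstract's goal of keeping edits from interfering with the subspace associated with general abilities.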
Similar Papers
Lifelong Knowledge Editing requires Better Regularization
Computation and Language
Fixes AI mistakes without breaking other knowledge.
Norm Growth and Stability Challenges in Localized Sequential Knowledge Editing
Computation and Language
Fixes AI when it learns new facts.
DeltaEdit: Enhancing Sequential Editing in Large Language Models by Controlling Superimposed Noise
Computation and Language
Keeps AI smart and accurate with many updates.