Representation Interventions Enable Lifelong Unstructured Knowledge Control
By: Xuyuan Liu, Zhengzhang Chen, Xinshuai Dong, and more
Potential Business Impact:
Updates AI knowledge without full retraining.
Large language models (LLMs) often produce incorrect or outdated content, and updating their knowledge efficiently and accurately without costly retraining remains a major challenge. The problem is especially hard for complex, unstructured knowledge in a lifelong setting, where many edits must coexist without interference. We introduce RILKE (Representation Intervention for Lifelong KnowledgE Control), a robust and scalable method that treats knowledge control as interventions within the model's representation space. Leveraging the expressiveness of that space, we identify two properties that enable RILKE to deliver fine-grained control over complex, unstructured knowledge while maintaining general utility with frozen base weights. During training, RILKE learns paraphrase-robust, edit-localized modules that confine each update to a low-dimensional subspace, minimizing cross-edit interference. At inference, a query-adaptive router selects the appropriate module to guide the model's generation. In evaluations on knowledge-editing benchmarks with LLaMA and Qwen models, RILKE scales to large datasets, demonstrating high edit success and strong paraphrase generalization while preserving general utility with modest memory overhead. These results show RILKE is an effective and scalable solution for lifelong knowledge control in LLMs.
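The core mechanism described above, per-edit modules that intervene in a low-dimensional subspace of the representation, plus a query-adaptive router that picks which module to apply, can be sketched as follows. This is a minimal illustration under assumed details, not RILKE's actual implementation: the rank, routing rule (cosine similarity against a per-edit key), and threshold are all hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, R = 16, 2  # hidden size and subspace rank (illustrative values)

class EditModule:
    """One stored edit: a rank-R intervention confined to a low-dim subspace."""
    def __init__(self, key):
        self.key = key / np.linalg.norm(key)    # routing key for this edit
        self.U = rng.normal(size=(D, R)) * 0.1  # write directions
        self.V = rng.normal(size=(D, R)) * 0.1  # read directions

    def apply(self, h):
        # Only the R-dimensional component V^T h is modified; the rest of
        # h passes through unchanged, limiting cross-edit interference.
        return h + self.U @ (self.V.T @ h)

def route(query_vec, modules, threshold=0.5):
    """Query-adaptive router: pick the module whose key best matches the query;
    fall back to no intervention when nothing matches well enough."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = [float(q @ m.key) for m in modules]
    best = int(np.argmax(sims))
    return modules[best] if sims[best] >= threshold else None

# Two independent edits with distinct routing keys.
modules = [EditModule(rng.normal(size=D)) for _ in range(2)]

h = rng.normal(size=D)                           # a hidden representation
q = modules[0].key + 0.05 * rng.normal(size=D)   # query close to edit 0
m = route(q, modules)
h_edited = m.apply(h) if m is not None else h    # intervened representation
```

Because each module touches only its own rank-R subspace and the base weights stay frozen, edits accumulate with small memory overhead and unrelated queries (which the router leaves unedited) retain the model's original behavior.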
Similar Papers
Disentangling Knowledge Representations for Large Language Model Editing
Computation and Language
Keeps AI smart without forgetting old facts.
KALE: Enhancing Knowledge Manipulation in Large Language Models via Knowledge-aware Learning
Computation and Language
Helps AI better use what it knows to answer questions.
KnowRL: Teaching Language Models to Know What They Know
Computation and Language
AI learns when it's right or wrong.