An Information-Theoretic Framework for Robust Large Language Model Editing
By: Qizhou Chen, Chengyu Wang, Taolin Zhang, and more
Potential Business Impact:
Fixes AI mistakes without re-teaching everything.
Large Language Models (LLMs) have become indispensable tools in science, technology, and society, enabling transformative advances across diverse fields. However, errors or outdated information within these models can undermine their accuracy and restrict their safe deployment. Developing efficient strategies for updating model knowledge without the expense and disruption of full retraining remains a critical challenge. Current model editing techniques frequently struggle to generalize corrections beyond narrow domains, leading to unintended consequences and limiting their practical impact. Here, we introduce a novel framework for editing LLMs, grounded in information bottleneck theory. This approach precisely compresses and isolates the essential information required for generalizable knowledge correction while minimizing disruption to unrelated model behaviors. Building upon this foundation, we present the Information Bottleneck Knowledge Editor (IBKE), which leverages compact latent representations to guide gradient-based updates, enabling robust and broadly applicable model editing. We validate IBKE's effectiveness across multiple LLM architectures and standard benchmark tasks, demonstrating state-of-the-art accuracy and improved generality and specificity of edits. These findings establish a theoretically principled and practical paradigm for open-domain knowledge editing, advancing the utility and trustworthiness of LLMs in real-world applications.
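For readers unfamiliar with the underlying theory, the classical information bottleneck objective (Tishby et al.) that the abstract's framework is grounded in can be sketched as

\[ \min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y) \]

where X is the input, Z is a compressed latent code, Y is the prediction target, and β trades compression against retained task information. How IBKE instantiates X, Y, and β for knowledge editing is not spelled out in this summary, so the mapping here is an assumption; the intuition suggested by the abstract is that keeping I(X; Z) small discards edit-irrelevant detail (limiting disruption to unrelated behaviors), while the I(Z; Y) term preserves the information needed for a generalizable correction, which then guides the gradient-based parameter update.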
Similar Papers
Topic Identification in LLM Input-Output Pairs through the Lens of Information Bottleneck
Computation and Language
Helps AI tell truth from made-up stories.
EvoEdit: Lifelong Free-Text Knowledge Editing through Latent Perturbation Augmentation and Knowledge-driven Parameter Fusion
Computation and Language
Lets AI learn new things without forgetting old ones.
Latent Knowledge Scalpel: Precise and Massive Knowledge Editing for Large Language Models
Machine Learning (CS)
Fixes computer brains with lots of new facts.