Score: 1

Balancing Knowledge Updates: Toward Unified Modular Editing in LLMs

Published: October 31, 2025 | arXiv ID: 2510.27400v1

By: Jiahao Liu, Zijian Wang, Kuo Zhao, and more

Potential Business Impact:

Lets factual errors in AI models be corrected more thoroughly by updating more of the model's components, not just one.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Knowledge editing has emerged as an efficient approach for updating factual knowledge in large language models (LLMs). It typically locates knowledge storage modules and then modifies their parameters. However, most existing methods focus on the weights of multilayer perceptron (MLP) modules, which are often identified as the main repositories of factual information. Other components, such as attention (Attn) modules, are often ignored during editing. This imbalance can leave residual outdated knowledge and limit editing effectiveness. We perform comprehensive knowledge localization experiments on advanced LLMs and find that Attn modules play a substantial role in factual knowledge storage and retrieval, especially in earlier layers. Based on these insights, we propose IntAttn-Edit, a method that extends the associative memory paradigm to jointly update both MLP and Attn modules. Our approach uses a knowledge balancing strategy that allocates update magnitudes in proportion to each module's measured contribution to knowledge storage. Experiments on standard benchmarks show that IntAttn-Edit achieves higher edit success, better generalization, and stronger knowledge preservation than prior methods. Further analysis shows that the balancing strategy keeps editing performance within an optimal range across diverse settings.
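The knowledge balancing idea described in the abstract can be illustrated with a small sketch (this is not the authors' code; the function, module names, and contribution scores below are hypothetical placeholders). It shows one plausible reading of the strategy: splitting a total edit-update budget between MLP and attention modules in proportion to their measured contribution to storing a fact.

    # Minimal sketch (assumed, not the paper's implementation): allocate edit
    # update magnitudes across modules in proportion to their measured
    # contribution to knowledge storage (e.g., from localization experiments).

    def allocate_update_magnitudes(total_update_norm, contributions):
        """Split a total update budget across modules proportionally.

        contributions: dict mapping module name -> measured knowledge-storage
        contribution score (illustrative values only).
        """
        total = sum(contributions.values())
        return {
            name: total_update_norm * (score / total)
            for name, score in contributions.items()
        }

    # Example: if attention contributes more in an earlier layer, it receives a
    # proportionally larger share of the edit there (numbers are made up).
    early_layer = allocate_update_magnitudes(
        total_update_norm=1.0,
        contributions={"attn": 0.6, "mlp": 0.4},
    )
    late_layer = allocate_update_magnitudes(
        total_update_norm=1.0,
        contributions={"attn": 0.2, "mlp": 0.8},
    )
    print(early_layer)  # {'attn': 0.6, 'mlp': 0.4}
    print(late_layer)   # {'attn': 0.2, 'mlp': 0.8}

Under this reading, neither module family is edited exclusively: the update to each is scaled by its share of the stored knowledge, which is how the paper describes keeping editing performance balanced across settings.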

Country of Origin
🇦🇺 🇨🇳 Australia, China

Page Count
16 pages

Category
Computer Science:
Computation and Language