
Hierarchical Orthogonal Residual Spread for Precise Massive Editing in Large Language Models

Published: January 16, 2026 | arXiv ID: 2601.11441v1

By: Xiaojie Gu, Guangxu Chen, Yuheng Yang, and more

Potential Business Impact:

Corrects specific mistakes in AI models without breaking their other skills.

Business Areas:
Semantic Search, Internet Services

Large language models (LLMs) exhibit exceptional performance across various domains, yet they face critical safety concerns. Model editing has emerged as an effective approach to mitigating these issues. Existing model editing methods often focus on optimizing an information matrix that blends new and old knowledge; while effective, these approaches can be computationally expensive and may cause conflicts. In contrast, we shift our attention to Hierarchical Orthogonal Residual SprEad of the information matrix, which reduces noisy gradients and enables more stable edits from a different perspective. We demonstrate the effectiveness of our method, HORSE, through a clear theoretical comparison with several popular methods and extensive experiments on two datasets across multiple LLMs. The results show that HORSE maintains precise massive editing across diverse scenarios. The code is available at https://github.com/XiaojieGu/HORSE.

Repos / Data Links
https://github.com/XiaojieGu/HORSE

Page Count
5 pages

Category
Computer Science: Computation and Language