HyperEdit: Unlocking Instruction-based Text Editing in LLMs via Hypernetworks
By: Yiming Zeng, Jinghan Cao, Zexin Li, and more
Potential Business Impact:
Edits code as instructed, with fewer unintended changes.
Instruction-based text editing is increasingly critical for real-world applications such as code editors (e.g., Cursor), but Large Language Models (LLMs) continue to struggle with this task. Unlike free-form generation, editing requires faithfully implementing user instructions while preserving unchanged content, as even minor unintended modifications can break functionality. Existing approaches treat editing as generic text generation, leading to two key failures: they struggle to faithfully align edits with diverse user intents, and they often over-edit unchanged regions. We propose HyperEdit to address both issues. First, we introduce hypernetwork-based dynamic adaptation that generates request-specific parameters, enabling the model to tailor its editing strategy to each instruction. Second, we develop difference-aware regularization that focuses supervision on modified spans, preventing over-editing while ensuring precise, minimal changes. HyperEdit achieves a 9%–30% relative improvement in BLEU on modified regions over state-of-the-art baselines, despite utilizing only 3B parameters.
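The two mechanisms described in the abstract lend themselves to a compact illustration. Below is a minimal PyTorch-style sketch, not the paper's actual architecture: it assumes a LoRA-style low-rank adapter whose factors are emitted by a hypernetwork conditioned on a pooled instruction embedding, and a token-level mask over modified spans for the difference-aware loss. All names and hyperparameters here (EditHyperNetwork, diff_aware_loss, rank, alpha, beta) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EditHyperNetwork(nn.Module):
    """Sketch of a hypernetwork: maps an instruction embedding to
    request-specific low-rank (LoRA-style) weight deltas for one linear layer."""

    def __init__(self, instr_dim, hidden_dim, rank=8):
        super().__init__()
        self.rank = rank
        self.hidden_dim = hidden_dim
        # Emit the flattened A (hidden_dim x rank) and B (rank x hidden_dim) factors.
        self.to_a = nn.Linear(instr_dim, hidden_dim * rank)
        self.to_b = nn.Linear(instr_dim, rank * hidden_dim)

    def forward(self, instr_emb):
        # instr_emb: (batch, instr_dim) pooled embedding of the edit instruction.
        a = self.to_a(instr_emb).view(-1, self.hidden_dim, self.rank)
        b = self.to_b(instr_emb).view(-1, self.rank, self.hidden_dim)
        # Per-request weight delta, applied on top of a frozen base projection.
        return torch.bmm(a, b)  # (batch, hidden_dim, hidden_dim)


def diff_aware_loss(logits, targets, modified_mask, alpha=1.0, beta=0.1):
    """Sketch of difference-aware regularization: full weight on tokens inside
    modified spans, down-weighted supervision on unchanged tokens."""
    # logits: (batch, seq, vocab); targets: (batch, seq); modified_mask: (batch, seq) in {0, 1}.
    modified_mask = modified_mask.float()
    per_token = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none"
    )  # (batch, seq)
    weights = alpha * modified_mask + beta * (1.0 - modified_mask)
    return (weights * per_token).sum() / weights.sum().clamp(min=1.0)
```

In such a setup, the generated delta would be added to a frozen base layer separately for each editing request, while the weighted loss concentrates gradient signal on the spans the instruction actually changes rather than penalizing the model for reproducing untouched text.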
Similar Papers
EvoEdit: Lifelong Free-Text Knowledge Editing through Latent Perturbation Augmentation and Knowledge-driven Parameter Fusion
Computation and Language
Lets AI learn new things without forgetting old ones.
Envisioning Future Interactive Web Development: Editing Webpage with Natural Language
Software Engineering
Lets computers change website designs by talking.
Resolving UnderEdit & OverEdit with Iterative & Neighbor-Assisted Model Editing
Computation and Language
Updates AI knowledge without breaking other facts.