MMEDIT: A Unified Framework for Multi-Type Audio Editing via Audio Language Model
By: Ye Tao, Xuenan Xu, Wen Wu, and more
Text-guided audio editing aims to modify specific acoustic events while strictly preserving non-target content. Despite recent progress, existing approaches remain fundamentally limited. Training-free methods often suffer from signal degradation caused by diffusion inversion, while training-based methods, although achieving higher generation quality, are severely constrained by the scarcity of high-quality paired data and task formulations that cover only a narrow subset of editing operations. In addition, standard architectures typically decouple text and audio processing, limiting the ability to align instructions with specific acoustic contexts. To address these challenges, we propose MMEdit, an audio-language-model-driven framework for unified audio editing. We systematically extend task definitions to cover a comprehensive range of editing operations, including addition, replacement, removal, reordering, and attribute modification. Furthermore, we design a scalable data synthesis pipeline to construct large-scale paired datasets with fine-grained event-level annotations. To capture complex editing semantics, we integrate a Qwen2-Audio encoder with an MMDiT-based generator, enabling precise cross-modal alignment and localized editing. Experimental results demonstrate that our method achieves superior editing localization accuracy, robust instruction following, and high fidelity in non-edited regions.
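The five editing operations the abstract lists (addition, replacement, removal, reordering, attribute modification) imply event-level paired annotations. A minimal sketch of what one such paired record might look like is below; the field names and schema are illustrative assumptions, not MMEdit's actual data format.

```python
# Hypothetical paired-data record for unified audio editing
# (schema is illustrative; it is NOT the authors' actual format).
from dataclasses import dataclass

# The five operation types described in the abstract.
EDIT_OPS = {"add", "replace", "remove", "reorder", "modify_attribute"}

@dataclass
class EditSample:
    instruction: str   # natural-language edit instruction
    operation: str     # one of EDIT_OPS
    target_event: str  # acoustic event the edit applies to
    src_audio: str     # path to the source audio clip
    tgt_audio: str     # path to the edited (ground-truth) clip

    def __post_init__(self):
        if self.operation not in EDIT_OPS:
            raise ValueError(f"unknown edit operation: {self.operation}")

sample = EditSample(
    instruction="Remove the dog barking in the background",
    operation="remove",
    target_event="dog_bark",
    src_audio="clips/0001_src.wav",
    tgt_audio="clips/0001_tgt.wav",
)
print(sample.operation)  # remove
```

Records like this pair a source clip with its edited counterpart and a fine-grained event label, which is the kind of supervision the proposed synthesis pipeline is designed to produce at scale.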
Similar Papers
AV-Edit: Multimodal Generative Sound Effect Editing via Audio-Visual Semantic Joint Control
Multimedia
Edits sound effects in video via joint audio-visual semantic control.
RFM-Editing: Rectified Flow Matching for Text-guided Audio Editing
Sound
Performs text-guided audio editing with rectified flow matching.
Audio-Guided Visual Editing with Complex Multi-Modal Prompts
CV and Pattern Recognition
Edits images using complex multimodal prompts combining audio and text.