Diff-TONE: Timestep Optimization for iNstrument Editing in Text-to-Music Diffusion Models
By: Teysir Baoueb, Xiaoyu Bie, Xi Wang, and more
Potential Business Impact:
Changes the instruments in a song without ruining the underlying music.
Breakthroughs in text-to-music generation models are transforming the creative landscape, equipping musicians with innovative tools for composition and experimentation like never before. However, controlling the generation process to achieve a specific desired outcome remains a significant challenge. Even a minor change in the text prompt, combined with the same random seed, can drastically alter the generated piece. In this paper, we explore the application of existing text-to-music diffusion models for instrument editing. Specifically, for an existing audio track, we aim to leverage a pretrained text-to-music diffusion model to edit the instrument while preserving the underlying content. Based on the insight that the model first focuses on the overall structure or content of the audio, then adds instrument information, and finally refines the quality, we show that selecting a well-chosen intermediate timestep, identified through an instrument classifier, yields a balance between preserving the original piece's content and achieving the desired timbre. Our method does not require additional training of the text-to-music diffusion model, nor does it compromise the generation process's speed.
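The recipe described in the abstract is essentially partial-noising editing: noise the source track's latent up to an intermediate timestep, then denoise it under the target-instrument prompt, so the coarse musical content laid down in early steps is kept while the timbre is re-synthesized. The sketch below is a minimal illustration of that idea; the interface (`denoise_fn`, `alphas_cumprod`, the DDIM-style update, and passing in a classifier-chosen `t_edit`) is an assumption for illustration, not the authors' implementation.

```python
import torch

@torch.no_grad()
def edit_instrument(x0, target_cond, denoise_fn, alphas_cumprod, t_edit):
    """Sketch of instrument editing from an intermediate diffusion timestep.

    Hypothetical interface (not the paper's code):
      x0             -- latent of the source track
      target_cond    -- text embedding of the target-instrument prompt
      denoise_fn     -- noise-prediction network: denoise_fn(x_t, t, cond) -> eps_hat
      alphas_cumprod -- 1-D tensor of cumulative alpha-bar values, length T
      t_edit         -- intermediate timestep, e.g. selected with an instrument
                        classifier as proposed in the paper
    """
    # Forward-noise the source latent directly to t_edit (standard DDPM closed form):
    #   x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    a_bar = alphas_cumprod[t_edit]
    x = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * torch.randn_like(x0)

    # Reverse diffusion from t_edit down to 0, conditioned on the *new* prompt.
    # Starting mid-way preserves the coarse content of x0 while letting the model
    # rewrite instrument information in the remaining steps.
    for t in range(int(t_edit), -1, -1):
        eps_hat = denoise_fn(x, t, target_cond)
        a_bar_t = alphas_cumprod[t]
        a_bar_prev = alphas_cumprod[t - 1] if t > 0 else alphas_cumprod.new_tensor(1.0)
        # Predict x0 from the noise estimate, then take a deterministic DDIM step.
        x0_hat = (x - (1.0 - a_bar_t).sqrt() * eps_hat) / a_bar_t.sqrt()
        x = a_bar_prev.sqrt() * x0_hat + (1.0 - a_bar_prev).sqrt() * eps_hat
    return x  # edited latent; decode with the model's VAE / vocoder
```

The choice of `t_edit` is the whole trade-off: a larger value gives the model more freedom to change timbre but risks discarding the original content, while a smaller value preserves content but may leave the original instrument audible, which is why the paper selects it with an instrument classifier rather than fixing it by hand.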
Similar Papers
Generation of Musical Timbres using a Text-Guided Diffusion Model
Sound
Creates musical timbres from text descriptions.
Rethinking Direct Preference Optimization in Diffusion Models
CV and Pattern Recognition
Makes AI pictures match what people want.
Diffusion Timbre Transfer Via Mutual Information Guided Inpainting
Sound
Transfers a track's timbre without retraining.