STAGE: Stemmed Accompaniment Generation through Prefix-Based Conditioning

Published: April 8, 2025 | arXiv ID: 2504.05690v2

By: Giorgio Strano, Chiara Ballanti, Donato Crisostomi, and more

Potential Business Impact:

Helps musicians generate instrumental accompaniment stems that fit an existing mix, supporting iterative, human-in-the-loop composition.

Business Areas:
Musical Instruments, Media and Entertainment, Music and Audio

Recent advances in generative models have made it possible to create high-quality, coherent music, with some systems delivering production-level output. Yet, most existing models focus solely on generating music from scratch, limiting their usefulness for musicians who want to integrate such models into a human, iterative composition workflow. In this paper, we introduce STAGE, our STemmed Accompaniment GEneration model, fine-tuned from the state-of-the-art MusicGen to generate single-stem instrumental accompaniments conditioned on a given mixture. Inspired by instruction-tuning methods for language models, we extend the transformer's embedding matrix with a context token, enabling the model to attend to a musical context through prefix-based conditioning. Compared to the baselines, STAGE yields accompaniments that exhibit stronger coherence with the input mixture, higher audio quality, and closer alignment with textual prompts. Moreover, by conditioning on a metronome-like track, our framework naturally supports tempo-constrained generation, achieving state-of-the-art alignment with the target rhythmic structure, all without requiring any additional tempo-specific module. As a result, STAGE offers a practical, versatile tool for interactive music creation that can be readily adopted by musicians in real-world workflows.
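The core mechanism described above, extending the embedding matrix with a context token and conditioning via a prefix, can be sketched as follows. This is a minimal illustrative sketch, not the actual STAGE/MusicGen implementation: the vocabulary size, dimensions, and the `build_prefixed_input` helper are all assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of prefix-based conditioning: the embedding matrix
# gains one extra row for a special "context" token, and the encoded
# mixture is prepended as a prefix that the transformer can attend to.
VOCAB_SIZE = 2048              # original codebook vocabulary (assumed)
EMBED_DIM = 16                 # embedding width (small for the sketch)
CONTEXT_TOKEN_ID = VOCAB_SIZE  # id of the newly added context token

# Extend the embedding matrix by one row for the context token.
embedding = nn.Embedding(VOCAB_SIZE + 1, EMBED_DIM)

def build_prefixed_input(context_codes, target_codes):
    """Concatenate [context, CONTEXT_TOKEN, target] and embed the result,
    so self-attention over the prefix conditions the generation."""
    sep = torch.tensor([CONTEXT_TOKEN_ID])
    tokens = torch.cat([context_codes, sep, target_codes])
    return embedding(tokens)  # shape: (len(context) + 1 + len(target), EMBED_DIM)

context = torch.randint(0, VOCAB_SIZE, (8,))  # stand-in for encoded mixture
target = torch.randint(0, VOCAB_SIZE, (4,))   # accompaniment codes so far
x = build_prefixed_input(context, target)
print(x.shape)  # torch.Size([13, 16])
```

In an autoregressive decoder, the loss would typically be computed only over the target positions, so the prefix acts purely as conditioning context rather than as material to be predicted.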

Country of Origin
🇮🇹 Italy

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Sound