Fine-Grained Control over Music Generation with Activation Steering
By: Dipanshu Panda, Jayden Koshy Joe, Harshith M R, and more
Potential Business Impact:
Changes music's sound, style, and genre.
We present a method for fine-grained control over music generation through inference-time interventions on MusicGen, an autoregressive generative music transformer. Our approach enables timbre transfer, style transfer, and genre fusion by steering the residual stream using the weights of linear probes trained on it, or by steering the attention-layer activations in a similar manner. We observe that modelling this as a regression task improves performance, and hypothesize that the mean-squared-error objective better preserves meaningful directional information in the activation space. Combined with the global conditioning offered by text prompts in MusicGen, our method provides both global and local control over music generation. Audio samples illustrating our method are available on our demo page.
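To make the steering idea concrete, below is a minimal sketch of inference-time activation steering with PyTorch forward hooks. It uses a toy residual block as a stand-in for one decoder layer of an autoregressive music transformer; the probe weight vector `w_probe` and the scale `alpha` are illustrative placeholders, not the paper's actual probes or hyperparameters.

```python
# Minimal sketch: shift a layer's residual-stream output along the direction
# of a linear probe's weight vector at inference time (hypothetical values).
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 8

class ToyBlock(nn.Module):
    """Stand-in for one decoder block of a music transformer."""
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(d, d)

    def forward(self, x):
        return x + self.proj(x)  # residual-stream update

block = ToyBlock(d_model)

# Weight vector of a linear probe trained on this layer's activations,
# e.g. to predict a timbre or genre attribute (randomly initialized here).
w_probe = torch.randn(d_model)
steer_dir = w_probe / w_probe.norm()   # unit steering direction
alpha = 2.0                            # steering strength (tunable)

def steering_hook(module, inputs, output):
    # Returning a value from a forward hook replaces the module's output,
    # so the downstream layers see the steered activations.
    return output + alpha * steer_dir

handle = block.register_forward_hook(steering_hook)

x = torch.randn(1, 4, d_model)         # (batch, time, d_model) activations
with torch.no_grad():
    steered = block(x)

handle.remove()                        # restore the unsteered model
print(steered.shape)
```

In the same spirit, the hook could instead be attached to an attention sublayer to steer its activations, with `alpha` trading off steering strength against generation quality.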
Similar Papers
Activation Patching for Interpretable Steering in Music Generation
Sound
Controls music's speed and sound with words.
Steering Autoregressive Music Generation with Recursive Feature Machines
Machine Learning (CS)
Guides music AI to play specific notes.
MusicGen-Stem: Multi-stem music generation and edition through autoregressive modeling
Sound
Makes music by mixing bass, drums, and more.