Improving AI-generated music with user-guided training
By: Vishwa Mohan Singh, Sai Anirudh Aryasomayajula, Ahan Chatterjee, and more
Potential Business Impact:
Teaches AI to make music people like.
AI music generation has advanced rapidly, with diffusion and autoregressive models enabling high-fidelity outputs. These tools can alter styles and mix or isolate individual instruments. Because sound can be represented as spectrograms, image-generation algorithms can be applied to generate novel music. However, these algorithms are typically trained on fixed datasets, which makes it difficult for them to interpret and respond to user input accurately. This is especially problematic because music is highly subjective and demands a degree of personalization that image generation typically does not. In this work, we propose a human-computation approach that gradually improves the performance of these algorithms based on user interactions. The human-computation element involves aggregating and selecting user ratings to serve as the loss signal for fine-tuning the model. We employ a genetic algorithm that incorporates this feedback to enhance the baseline performance of a model initially trained on a fixed dataset. The effectiveness of the approach is measured by the average increase in user ratings with each iteration. In the pilot test, the first iteration showed an average rating increase of 0.2 over the baseline, and the second iteration achieved a further increase of 0.39 over the first.
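To make the feedback loop concrete, the sketch below shows one plausible shape of a user-guided genetic algorithm: candidates are rated by users, aggregated ratings act as the fitness signal, and the top-rated candidates are selected, crossed over, and mutated for the next iteration. This is a minimal illustration, not the authors' implementation; every name here (Candidate vectors, collect_user_ratings, POP_SIZE, and so on) is hypothetical, and the ratings are synthetic stand-ins for real user input.

```python
import random
from statistics import mean

# Hypothetical constants; the paper does not specify these values.
POP_SIZE = 8          # candidate clips (or parameter sets) per iteration
ELITE_FRACTION = 0.25 # share of top-rated candidates kept for breeding
MUTATION_STD = 0.05   # strength of random perturbation applied to offspring


def random_candidate(dim=16):
    """A candidate represented as a vector, e.g. latent/conditioning parameters."""
    return [random.uniform(-1.0, 1.0) for _ in range(dim)]


def collect_user_ratings(candidate):
    """Stand-in for real user feedback: returns a synthetic rating in [1, 5].
    In the described system, several users rate the generated clip and the
    ratings are aggregated (here, averaged) into a single fitness value."""
    synthetic_ratings = [random.uniform(1, 5) for _ in range(3)]
    return mean(synthetic_ratings)


def crossover(parent_a, parent_b):
    """Uniform crossover: each gene is taken from one of the two parents."""
    return [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]


def mutate(candidate):
    """Small Gaussian perturbation of each gene."""
    return [g + random.gauss(0.0, MUTATION_STD) for g in candidate]


def evolve_one_iteration(population):
    """Rate, select, and breed one generation; returns the new population."""
    rated = sorted(population, key=collect_user_ratings, reverse=True)
    elites = rated[: max(2, int(ELITE_FRACTION * len(rated)))]
    children = []
    while len(children) < len(population) - len(elites):
        pa, pb = random.sample(elites, 2)
        children.append(mutate(crossover(pa, pb)))
    return elites + children


if __name__ == "__main__":
    population = [random_candidate() for _ in range(POP_SIZE)]
    for iteration in range(2):  # the pilot ran two iterations beyond the baseline
        population = evolve_one_iteration(population)
        avg = mean(collect_user_ratings(c) for c in population)
        print(f"iteration {iteration + 1}: mean synthetic rating {avg:.2f}")
```

In practice, the fitness evaluation would be the slow, human-in-the-loop step, and the selected ratings would also feed the loss used to fine-tune the underlying generative model, as the abstract describes.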
Similar Papers
Aligning Generative Music AI with Human Preferences: Methods and Challenges
Sound
AI makes music that people actually like.
Exploring listeners' perceptions of AI-generated and human-composed music for functional emotional applications
Human-Computer Interaction
People like AI music more, even if they think it's human.
Augmenting Online Meetings with Context-Aware Real-time Music Generation
Human-Computer Interaction
AI music makes online meetings more relaxing and focused.