Procedural Music Generation Systems in Games
By: Shangxuan Luo, Joshua Reiss
Procedural Music Generation (PMG) is an emerging field that algorithmically creates musical content for video games. By leveraging techniques ranging from simple rule-based approaches to advanced machine learning algorithms, PMG has the potential to significantly improve development efficiency, provide richer musical experiences, and enhance player immersion. However, academic prototypes often diverge from practical applications due to differences in priorities such as novelty, reliability, and allocated resources. This paper bridges the gap between research and practice by presenting a systematic overview of current PMG techniques in both fields, offering a two-aspect taxonomy. Through a comparative analysis, this study identifies key research challenges in algorithm implementation, music quality, and game integration. Finally, the paper outlines future research directions, emphasising task-oriented and context-aware design, more comprehensive quality evaluation methods, and improved research tool integration, providing actionable insights for developers, composers, and researchers seeking to advance PMG in game contexts.