Large Language Models' Internal Perception of Symbolic Music
By: Andrew Shin, Kunitake Kaneko
Potential Business Impact:
AI learns to write simple music from text descriptions.
Large language models (LLMs) excel at modeling relationships between strings in natural language and have shown promise in extending to other symbolic domains such as coding and mathematics. However, the extent to which they implicitly model symbolic music remains underexplored. This paper investigates how LLMs represent musical concepts by having them generate symbolic music data from textual prompts describing combinations of genres and styles, and by evaluating the utility of the resulting data through recognition and generation tasks. We produce a dataset of LLM-generated MIDI files without relying on any explicit musical training. We then train neural networks entirely on this LLM-generated MIDI dataset and benchmark them against established models on genre and style classification as well as melody completion. Our results demonstrate that LLMs can infer rudimentary musical structures and temporal relationships from text alone. This highlights both their potential to implicitly encode musical patterns and their limitations in the absence of explicit musical context, shedding light on their generative capabilities for symbolic music.
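A minimal sketch of the text-to-MIDI step the abstract describes might look like the following. The `query_llm` helper, the prompt wording, and the 'pitch,start_beat,duration_beats,velocity' reply format are illustrative assumptions rather than the authors' actual pipeline, and the mido library is one convenient choice for writing MIDI files.

```python
# Sketch: ask an LLM for a melody in a simple text format, then convert
# the reply into a standard MIDI file. Assumptions (not from the paper):
# the query_llm helper, the prompt wording, and the note format.
from mido import Message, MidiFile, MidiTrack

TICKS_PER_BEAT = 480  # resolution of the output MIDI file


def query_llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-API call here. This canned reply
    # only demonstrates the expected 'pitch,start,duration,velocity' format.
    return "60,0,1,90\n63,1,0.5,80\n65,1.5,0.5,80\n67,2,2,95"


def notes_to_midi(notes, path):
    """Write (pitch, start_beat, dur_beats, velocity) tuples to a MIDI file."""
    events = []  # (absolute_tick, sort_flag, message)
    for pitch, start, dur, vel in notes:
        on_tick = int(start * TICKS_PER_BEAT)
        off_tick = int((start + dur) * TICKS_PER_BEAT)
        events.append((on_tick, 1, Message('note_on', note=pitch, velocity=vel, time=0)))
        events.append((off_tick, 0, Message('note_off', note=pitch, velocity=0, time=0)))
    events.sort(key=lambda e: (e[0], e[1]))  # note_offs first at equal ticks

    mid = MidiFile(ticks_per_beat=TICKS_PER_BEAT)
    track = MidiTrack()
    mid.tracks.append(track)
    prev_tick = 0
    for tick, _, msg in events:
        msg.time = tick - prev_tick  # mido expects delta times between messages
        prev_tick = tick
        track.append(msg)
    mid.save(path)


reply = query_llm(
    "List the notes of a short jazz melody, one per line, as "
    "'pitch,start_beat,duration_beats,velocity' using MIDI pitch numbers."
)
notes = []
for line in reply.strip().splitlines():
    parts = line.split(',')
    if len(parts) == 4:  # skip any conversational lines around the data
        p, s, d, v = parts
        notes.append((int(p), float(s), float(d), int(v)))
notes_to_midi(notes, 'llm_generated_melody.mid')
```

Sorting note_off events before note_on events at the same tick avoids stuck notes when consecutive notes share a boundary; the resulting MIDI files could then serve as training data for the classification and melody-completion experiments.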
Similar Papers
LLMs Know More Than Words: A Genre Study with Syntax, Metaphor & Phonetics
Computation and Language
Helps computers understand poetry and stories better.
Music Recommendation with Large Language Models: Challenges, Opportunities, and Evaluation
Information Retrieval
Helps music apps pick songs you'll love.
TuneGenie: Reasoning-based LLM agents for preferential music generation
Sound
AI creates music from your taste.