Decoding Musical Origins: Distinguishing Human and AI Composers

Published: September 14, 2025 | arXiv ID: 2509.11369v1

By: Cheng-Yang Tsai, Tzu-Wei Huang, Shao-Yu Wei, and more

Potential Business Impact:

Identifies whether a piece of music was composed by a human or generated by AI.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

With the rapid advancement of Large Language Models (LLMs), AI-driven music generation has become a vibrant and fruitful area of research. However, the representation of musical data remains a significant challenge. To address this, a novel, machine-learning-friendly music notation system, YNote, was developed. This study leverages YNote to train an effective classification model capable of distinguishing whether a piece of music was composed by a human (Native), a rule-based algorithm (Algorithm Generated), or an LLM (LLM Generated). We frame this as a text classification problem, applying the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm to extract structural features from YNote sequences and using the Synthetic Minority Over-sampling Technique (SMOTE) to address data imbalance. The resulting model achieves an accuracy of 98.25%, successfully demonstrating that YNote retains sufficient stylistic information for analysis. More importantly, the model can identify the unique "technological fingerprints" left by different AI generation techniques, providing a powerful tool for tracing the origins of AI-generated content.
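The abstract describes a standard text-classification pipeline: TF-IDF features over YNote sequences, SMOTE to rebalance the three origin classes, and a supervised classifier. The sketch below illustrates that pipeline under stated assumptions; the YNote token format, the train/test protocol, and the RandomForest classifier are placeholders chosen for illustration, not the paper's confirmed implementation.

```python
# Minimal sketch of the described pipeline: TF-IDF over YNote tokens,
# SMOTE on the training split, then a standard classifier.
# Assumptions: YNote pieces arrive as whitespace-separated token strings,
# and RandomForest stands in for the (unspecified) model used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE


def train_origin_classifier(ynote_sequences, labels):
    """Train a music-origin classifier from YNote strings.

    ynote_sequences: list of YNote pieces as whitespace-separated token strings.
    labels: matching list of "Native", "Algorithm Generated", or "LLM Generated".
    """
    # 1) Extract structural features with TF-IDF over YNote tokens.
    vectorizer = TfidfVectorizer(token_pattern=r"\S+")
    X = vectorizer.fit_transform(ynote_sequences)

    # 2) Hold out a test split first, then apply SMOTE to the training data only,
    #    so the synthetic oversampling never leaks into evaluation.
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0
    )
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

    # 3) Fit the classifier and report held-out performance per class.
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_bal, y_bal)
    print(classification_report(y_test, clf.predict(X_test)))
    return vectorizer, clf
```

Applying SMOTE only after the split is the usual way to keep the reported accuracy honest; oversampling the full dataset before splitting would let synthetic near-duplicates of test pieces appear in training.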

Country of Origin
🇹🇼 Taiwan, Province of China

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)