Decoding Musical Origins: Distinguishing Human and AI Composers
By: Cheng-Yang Tsai, Tzu-Wei Huang, Shao-Yu Wei, and more
Potential Business Impact:
Tells if music is human or AI-made.
With the rapid advancement of Large Language Models (LLMs), AI-driven music generation has become a vibrant and fruitful area of research. However, the representation of musical data remains a significant challenge. To address this, a novel, machine-learning-friendly music notation system, YNote, was developed. This study leverages YNote to train an effective classification model capable of distinguishing whether a piece of music was composed by a human (Native), a rule-based algorithm (Algorithm Generated), or an LLM (LLM Generated). We frame this as a text classification problem, applying the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm to extract structural features from YNote sequences and using the Synthetic Minority Over-sampling Technique (SMOTE) to address data imbalance. The resulting model achieves an accuracy of 98.25%, successfully demonstrating that YNote retains sufficient stylistic information for analysis. More importantly, the model can identify the unique "technological fingerprints" left by different AI generation techniques, providing a powerful tool for tracing the origins of AI-generated content.
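To make the described pipeline concrete, the sketch below shows one way to combine TF-IDF features, SMOTE oversampling, and a classifier using scikit-learn and imbalanced-learn. The toy token strings, the whitespace tokenization, and the random-forest classifier are illustrative assumptions; the paper's actual YNote encoding and model choice may differ.

```python
# Minimal sketch: TF-IDF features from notation strings + SMOTE + a classifier.
# The "ynote_sequences" below are toy stand-ins, not real YNote notation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

# Toy stand-ins for YNote-encoded pieces and their origin labels.
ynote_sequences = (
    ["C4 E4 G4 C5 " * 8] * 20      # "Native" (human-composed)
    + ["C4 C4 C4 C4 " * 8] * 8     # "Algorithm Generated" (minority class)
    + ["C4 D4 E4 F4 " * 8] * 12    # "LLM Generated"
)
labels = ["Native"] * 20 + ["Algorithm Generated"] * 8 + ["LLM Generated"] * 12

# 1. TF-IDF over whitespace-separated tokens captures structural token statistics.
vectorizer = TfidfVectorizer(analyzer="word", token_pattern=r"\S+")
X = vectorizer.fit_transform(ynote_sequences)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=42
)

# 2. SMOTE oversamples the minority classes in the training split only.
X_bal, y_bal = SMOTE(k_neighbors=3, random_state=42).fit_resample(X_train, y_train)

# 3. Fit the classifier (an assumed choice here) and evaluate on the held-out split.
clf = RandomForestClassifier(random_state=42)
clf.fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Applying SMOTE only after the train/test split, as above, keeps synthetic samples out of the evaluation set and avoids inflating the reported accuracy.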
Similar Papers
AI-generated Text Detection: A Multifaceted Approach to Binary and Multiclass Classification
Computation and Language
Finds if writing is from a person or AI.
Large Language Models' Internal Perception of Symbolic Music
Computation and Language
Computers learn music from text descriptions.
Detecting Musical Deepfakes
Sound
Finds fake music made by computers.