Detecting Musical Deepfakes
By: Nick Sunday
Potential Business Impact:
Detects music that was generated by AI rather than created by human artists.
The proliferation of Text-to-Music (TTM) platforms has democratized music creation, enabling users to effortlessly generate high-quality compositions. However, this innovation also presents new challenges to musicians and the broader music industry. This study investigates the detection of AI-generated songs using the FakeMusicCaps dataset by classifying audio as either deepfake or human. To simulate real-world adversarial conditions, tempo stretching and pitch shifting were applied to the dataset. Mel spectrograms were generated from the modified audio, then used to train and evaluate a convolutional neural network. In addition to presenting technical results, this work explores the ethical and societal implications of TTM platforms, arguing that carefully designed detection systems are essential to both protecting artists and unlocking the positive potential of generative AI in music.
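The pipeline described above (tempo stretching and pitch shifting for augmentation, mel spectrogram extraction, and a convolutional classifier) can be sketched as follows. This is a minimal illustration only, assuming librosa and PyTorch are available; the sample rate, layer sizes, and function names are placeholders and are not taken from the study itself.

```python
# Minimal sketch of the described pipeline: augment audio, compute a
# log-mel spectrogram, and classify it with a small CNN.
# All hyperparameters here are illustrative assumptions, not the paper's.
import librosa
import numpy as np
import torch
import torch.nn as nn


def augment_and_featurize(path, pitch_steps=0.0, tempo_rate=1.0,
                          sr=16000, n_mels=128):
    """Apply pitch shifting / tempo stretching, then return a log-mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    if pitch_steps:
        y = librosa.effects.pitch_shift(y, sr=sr, n_steps=pitch_steps)
    if tempo_rate != 1.0:
        y = librosa.effects.time_stretch(y, rate=tempo_rate)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)  # shape: (n_mels, frames)


class SpectrogramCNN(nn.Module):
    """Small CNN mapping a mel spectrogram to a single deepfake-vs-human logit."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # logit > 0 -> predicted deepfake

    def forward(self, x):  # x: (batch, 1, n_mels, frames)
        h = self.features(x).flatten(1)
        return self.classifier(h)


if __name__ == "__main__":
    model = SpectrogramCNN()
    spec = torch.randn(4, 1, 128, 256)  # stand-in batch of spectrograms
    loss = nn.BCEWithLogitsLoss()(model(spec).squeeze(1), torch.ones(4))
    loss.backward()  # one illustrative training step
```

In practice, the augmented FakeMusicCaps clips would be converted to spectrograms in a data loader, and the single output logit would be thresholded (e.g., sigmoid > 0.5) to label each clip as deepfake or human.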
Similar Papers
AI-Generated Music Detection and its Challenges
Sound
Finds fake music made by computers.
AI-Assisted Music Production: A User Study on Text-to-Music Models
Audio and Speech Processing
Lets computers make music from your words.
Decoding Musical Origins: Distinguishing Human and AI Composers
Machine Learning (CS)
Tells if music is human or AI-made.