Who Gets Heard? Rethinking Fairness in AI for Music Systems
By: Atharva Mehta, Shivam Chauhan, Megha Sharma, and more
Potential Business Impact:
Helps AI music systems treat all musical cultures fairly.
In recent years, the music research community has examined the risks of AI models for music; generative AI models in particular have raised concerns about copyright, deepfakes, and transparency. In our work, we raise concerns about cultural and genre biases in AI for music systems (music-AI systems), which affect stakeholders including creators, distributors, and listeners, and which shape representation in AI for music. These biases can misrepresent marginalized traditions, especially those from the Global South, producing inauthentic outputs (e.g., distorted ragas) that reduce creators' trust in these systems. Such harms risk reinforcing biases, limiting creativity, and contributing to cultural erasure. To address this, we offer recommendations at the dataset, model, and interface levels of music-AI systems.
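To make the dataset-level recommendation concrete, here is a minimal sketch of the kind of representation audit a dataset curator might run before training. The schema (a `genre` field per track), the `audit_genre_balance` helper, and the 1% threshold are all illustrative assumptions, not part of the paper's method.

```python
from collections import Counter

def audit_genre_balance(tracks, min_share=0.01):
    """Report each genre's share of a music dataset and flag
    genres falling below a minimum representation threshold.

    `tracks` is a list of metadata dicts with a 'genre' key;
    both the schema and the threshold are illustrative choices.
    """
    counts = Counter(t["genre"] for t in tracks)
    total = sum(counts.values())
    report = {}
    for genre, n in counts.most_common():
        share = n / total
        report[genre] = {
            "count": n,
            "share": round(share, 4),
            "underrepresented": share < min_share,
        }
    return report

# Toy example: a corpus heavily skewed toward Western pop.
corpus = (
    [{"genre": "pop"}] * 9000
    + [{"genre": "rock"}] * 900
    + [{"genre": "hindustani-classical"}] * 60
    + [{"genre": "carnatic"}] * 40
)
for genre, stats in audit_genre_balance(corpus).items():
    flag = "  <-- underrepresented" if stats["underrepresented"] else ""
    print(f"{genre}: {stats['count']} tracks ({stats['share']:.2%}){flag}")
```

Audits like this only surface raw counts; deciding what counts as adequate representation for a tradition still requires input from the communities concerned.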
Similar Papers
Towards Responsible AI Music: an Investigation of Trustworthy Features for Creative Systems
Artificial Intelligence
Makes AI art fair and safe for everyone.
Ethics Statements in AI Music Papers: The Effective and the Ineffective
Computers and Society
Helps AI music makers think about right and wrong.
Perception of AI-Generated Music: The Role of Composer Identity, Personality Traits, Music Preferences, and Perceived Humanness
Human-Computer Interaction
Helps AI understand what people like in music.