Who Gets Heard? Rethinking Fairness in AI for Music Systems

Published: November 8, 2025 | arXiv ID: 2511.05953v1

By: Atharva Mehta, Shivam Chauhan, Megha Sharma, and others

BigTech Affiliations: Microsoft

Potential Business Impact:

Offers recommendations for making AI music systems represent all musical cultures fairly.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

In recent years, the music research community has examined the risks of AI models for music, with generative models in particular raising concerns about copyright, deepfakes, and transparency. In our work, we raise concerns about cultural and genre biases in AI for music systems (music-AI systems), which affect stakeholders including creators, distributors, and listeners, shaping representation in AI for music. These biases can misrepresent marginalized traditions, especially from the Global South, producing inauthentic outputs (e.g., distorted ragas) that reduce creators' trust in these systems. Such harms risk reinforcing biases, limiting creativity, and contributing to cultural erasure. To address this, we offer recommendations at the dataset, model, and interface levels of music-AI systems.

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡¦πŸ‡ͺ United States, United Arab Emirates

Page Count
7 pages

Category
Computer Science:
Computers and Society