Who Gets Heard? Rethinking Fairness in AI for Music Systems
Authors: Atharva Mehta, Shivam Chauhan, Megha Sharma, Gus Xia, Kaustuv Kanti Ganguli, Nishanth Chandran, Zeerak Talat, Monojit Choudhury
Published: 2025-11-08 10:03:22+00:00
Comment: 7 pages, Accepted at NeurIPS'25 workshop on AI for Music
AI Summary
This paper raises concerns about cultural and genre biases in AI for music systems (music-AI systems), particularly how these biases misrepresent marginalized traditions and reduce creators' trust. It highlights the harms of such biases, including cultural erasure and limited creativity, which affect stakeholders such as creators, distributors, and listeners. The authors propose recommendations at the dataset, model, and interface levels to address these issues and promote fairness.
Abstract
In recent years, the music research community has examined the risks of AI models for music; generative AI models in particular have raised concerns about copyright, deepfakes, and transparency. In our work, we raise concerns about cultural and genre biases in AI for music systems (music-AI systems), which affect stakeholders including creators, distributors, and listeners, shaping representation in AI for music. These biases can misrepresent marginalized traditions, especially those from the Global South, producing inauthentic outputs (e.g., distorted ragas) that reduce creators' trust in these systems. Such harms risk reinforcing biases, limiting creativity, and contributing to cultural erasure. To address this, we offer recommendations at the dataset, model, and interface levels of music-AI systems.