Who Gets Heard? Rethinking Fairness in AI for Music Systems

Authors: Atharva Mehta, Shivam Chauhan, Megha Sharma, Gus Xia, Kaustuv Kanti Ganguli, Nishanth Chandran, Zeerak Talat, Monojit Choudhury

Published: 2025-11-08 10:03:22+00:00

Comment: 7 pages, Accepted at NeurIPS'25 workshop on AI for Music

AI Summary

This paper raises concerns about cultural and genre biases in AI for music systems (music-AI systems), particularly how these biases misrepresent marginalized traditions and reduce creators' trust. It highlights the harms of such biases, including cultural erosion and limited creativity, which affect stakeholders such as creators, distributors, and listeners. The authors propose recommendations at the dataset, model, and interface levels to address these issues and promote fairness.

Abstract

In recent years, the music research community has examined the risks of AI models for music; generative AI models in particular have raised concerns about copyright, deepfakes, and transparency. In our work, we raise concerns about cultural and genre biases in AI for music systems (music-AI systems), which affect stakeholders including creators, distributors, and listeners, and which shape representation in AI for music. These biases can misrepresent marginalized traditions, especially those from the Global South, producing inauthentic outputs (e.g., distorted ragas) that reduce creators' trust in these systems. Such harms risk reinforcing biases, limiting creativity, and contributing to cultural erasure. To address this, we offer recommendations at the dataset, model, and interface levels of music-AI systems.


Key findings
The paper finds that cultural and genre biases in music-AI systems lead to the misrepresentation and homogenization of music, especially music from the Global South, thereby eroding cultural identity and widening economic disparities. These biases stem from technical gaps in data, representation, and interface design, such as flattened genre labels and a lack of cultural metadata. To mitigate these harms, the authors recommend improved dataset documentation, model traceability, genre-diverse evaluation, and culturally sensitive interface design; a sketch of what richer dataset documentation might look like follows below.
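As a concrete illustration of the dataset-level recommendation, here is a minimal sketch of what per-track cultural metadata and a simple coverage audit could look like. Every name in it (TrackMetadata, audit_coverage, and each field) is a hypothetical illustration under our own assumptions, not a construct from the paper.

```python
# A minimal, hypothetical sketch of per-track cultural metadata for
# dataset documentation; every field name is illustrative, not from
# the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackMetadata:
    track_id: str
    title: str
    tradition: str                        # e.g., "Hindustani classical"
    genre_labels: list[str]               # fine-grained, not a flattened label
    region: str                           # geographic/cultural origin
    language: Optional[str] = None
    raga_or_mode: Optional[str] = None    # modal framework, where applicable
    source: str = ""                      # provenance: archive, license, collector
    consent_documented: bool = False      # creator/community consent recorded?
    annotator_background: str = ""        # who labeled it, from which tradition

def audit_coverage(records: list[TrackMetadata]) -> dict[str, int]:
    """Count tracks per tradition to surface representation gaps."""
    counts: dict[str, int] = {}
    for record in records:
        counts[record.tradition] = counts.get(record.tradition, 0) + 1
    return counts
```

A coverage audit of this kind makes under-represented traditions visible before training rather than after deployment.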
Approach
The authors identify the key stakeholders in the music-AI ecosystem and analyze the implications of representational biases for them, including misrepresentation, homogenization, cultural erosion, and opaque training processes. They detail how design choices and technical challenges contribute to these biases and propose system-, dataset-, and model-level strategies, alongside governance recommendations, to ensure fairness and cultural inclusivity.
Datasets
UNKNOWN
Model(s)
UNKNOWN
Author countries
UAE, India, UK