SpeechLLM-as-Judges: Towards General and Interpretable Speech Quality Evaluation
Authors: Hui Wang, Jinghua Zhao, Yifan Yang, Shujie Liu, Junyang Chen, Yanzhe Zhang, Shiwan Zhao, Jinyu Li, Jiaming Zhou, Haoqin Sun, Yan Lu, Yong Qin
Published: 2025-10-16 13:19:07+00:00
AI Summary
This paper presents SpeechLLM-as-Judges, a novel paradigm leveraging large language models (LLMs) for general, structured, and explanation-based speech quality evaluation across diverse tasks. The authors introduce SpeechEval, a large-scale multilingual dataset spanning four evaluation tasks, and develop SQ-LLM, a speech-quality-aware LLM trained with chain-of-thought (CoT) reasoning and reward optimization. SQ-LLM demonstrates strong performance, interpretability, and generalization across multiple evaluation scenarios, including deepfake detection.
Abstract
Generative speech technologies are progressing rapidly, but evaluating the perceptual quality of synthetic speech remains a core challenge. Existing methods typically rely on scalar scores or binary decisions, which lack interpretability and generalization across tasks and languages. We present SpeechLLM-as-Judges, a new paradigm for enabling large language models (LLMs) to conduct structured and explanation-based speech quality evaluation. To support this direction, we introduce SpeechEval, a large-scale dataset containing 32,207 multilingual speech clips and 128,754 annotations spanning four tasks: quality assessment, pairwise comparison, improvement suggestion, and deepfake detection. Based on this resource, we develop SQ-LLM, a speech-quality-aware LLM trained with chain-of-thought reasoning and reward optimization to improve capability. Experimental results show that SQ-LLM delivers strong performance across tasks and languages, revealing the potential of this paradigm for advancing speech quality evaluation. Relevant resources will be open-sourced.
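To make the "structured and explanation-based evaluation" idea concrete, the sketch below shows one way a judge-style prompt and structured response could be organized for the four tasks named in the abstract. This is a hypothetical Python illustration under stated assumptions: the prompt wording, field names, and 1-5 score scale are placeholders and do not reflect the actual SQ-LLM interface or SpeechEval annotation format.

```python
# Hypothetical illustration only: a minimal judge-style prompt and structured
# response schema for explanation-based speech quality evaluation. The field
# names, score scale, and prompt wording are assumptions, not the paper's
# actual SQ-LLM interface.
import json
from dataclasses import dataclass, asdict


@dataclass
class SpeechQualityJudgment:
    overall_score: float          # assumed 1-5 MOS-like scale
    reasoning: str                # chain-of-thought style explanation
    improvement_suggestion: str   # free-form suggestion (one of the four tasks)
    is_deepfake: bool             # binary flag for the deepfake-detection task


def build_judge_prompt(transcript: str, task: str = "quality_assessment") -> str:
    """Compose the textual instruction that would accompany a speech clip.

    In practice the audio itself would be passed to a speech-aware LLM; this
    sketch only covers the text side of the prompt.
    """
    return (
        f"Task: {task}\n"
        f"Transcript (for reference): {transcript}\n"
        "Listen to the attached clip, reason step by step about naturalness, "
        "intelligibility, and artifacts, then return a JSON object with the "
        "fields: overall_score, reasoning, improvement_suggestion, is_deepfake."
    )


if __name__ == "__main__":
    prompt = build_judge_prompt("Hello, welcome to the demo.")
    print(prompt)
    # A judgment the model might return, shown here as a hard-coded example.
    example = SpeechQualityJudgment(
        overall_score=3.5,
        reasoning="Mild robotic timbre and clipped word endings reduce naturalness.",
        improvement_suggestion="Smooth prosody at phrase boundaries; reduce spectral artifacts.",
        is_deepfake=False,
    )
    print(json.dumps(asdict(example), indent=2))
```

The structured output, as opposed to a single scalar score, is what makes the evaluation interpretable: each numeric judgment is accompanied by an explanation and an actionable suggestion, and the same schema can cover pairwise comparison or deepfake detection by varying the task field.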