Singing Timbre Popularity Assessment Based on Multimodal Large Foundation Model
By: Zihao Wang, Ruibin Yuan, Ziqi Geng, and more
Potential Business Impact:
Helps computers judge singing quality without a reference song.
Automated singing assessment is crucial for education and entertainment. However, existing systems face two fundamental limitations: reliance on reference tracks, which stifles creative expression, and the reduction of complex performances to non-diagnostic scores based solely on pitch and rhythm. We advocate a shift from discriminative to descriptive evaluation, creating a complete ecosystem for reference-free, multi-dimensional assessment. First, we introduce Sing-MD, a large-scale dataset annotated by experts across four dimensions: breath control, timbre quality, emotional expression, and vocal technique. Our analysis reveals significant annotation inconsistencies among experts, challenging the validity of traditional accuracy-based metrics. Second, to address the memory limitations of Multimodal Large Language Models (MLLMs) when analyzing full-length songs, we propose VocalVerse, an efficient hybrid architecture that leverages a lightweight acoustic encoder to model global performance features and long-term dependencies. Third, to address the shortcomings of automated metrics, we establish the H-TPR (Human-in-the-loop Tiered Perceptual Ranking) benchmark, which evaluates a model's ability to generate perceptually valid rankings rather than predict noisy ground-truth scores.
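The abstract does not spell out the H-TPR scoring protocol, but the core idea, judging a model by whether its ranking respects coarse human tiers rather than by how closely it matches noisy per-item scores, can be illustrated concretely. Below is a minimal sketch, assuming performances carry expert tier labels and model-predicted scores; the metric name `tier_pairwise_accuracy` and the pairwise formulation are illustrative assumptions, not the paper's actual benchmark definition.

```python
from itertools import combinations

def tier_pairwise_accuracy(tiers, scores):
    """Fraction of cross-tier pairs the model ranks in the same order
    as the human tiers (hypothetical metric, not the paper's spec).

    tiers:  human tier label per performance (higher = better tier)
    scores: model-predicted quality score per performance
    Same-tier pairs are skipped: expert annotations are too noisy
    to define a reliable ordering within a tier.
    """
    correct, total = 0, 0
    for i, j in combinations(range(len(tiers)), 2):
        if tiers[i] == tiers[j]:
            continue  # no ground-truth ordering within a tier
        total += 1
        # Correct if the score ordering agrees with the tier ordering.
        if (scores[i] - scores[j]) * (tiers[i] - tiers[j]) > 0:
            correct += 1
    return correct / total if total else 0.0

# Toy usage: five clips across three expert tiers.
human_tiers  = [3, 3, 2, 2, 1]
model_scores = [0.91, 0.84, 0.70, 0.75, 0.40]
print(f"tiered ranking agreement: "
      f"{tier_pairwise_accuracy(human_tiers, model_scores):.2f}")
```

Note how the toy model earns a perfect score despite disagreeing with any single "ground-truth" number: only cross-tier order matters, which is exactly the robustness to annotator inconsistency that motivates a ranking-based benchmark.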
Similar Papers
Generative Multi-modal Feedback for Singing Voice Synthesis Evaluation
Sound
Helps computers judge singing better with words.
Multidimensional Music Aesthetic Evaluation via Semantically Consistent C-Mixup Augmentation
Sound
Makes music sound better by learning what people like.
Musical Score Understanding Benchmark: Evaluating Large Language Models' Comprehension of Complete Musical Scores
Sound
Helps computers understand music scores like a human.