QvTAD: Differential Relative Attribute Learning for Voice Timbre Attribute Detection
By: Zhiyu Wu, Jingyi Fang, Yufei Tang, and more
Potential Business Impact:
Makes computer voices sound more like real people.
Voice Timbre Attribute Detection (vTAD) plays a pivotal role in fine-grained timbre modeling for speech generation tasks. However, it remains challenging due to the inherently subjective nature of timbre descriptors and the severe label imbalance in existing datasets. In this work, we present QvTAD, a novel pairwise comparison framework based on differential attention, designed to enhance the modeling of perceptual timbre attributes. To address the label imbalance in the VCTK-RVA dataset, we introduce a graph-based data augmentation strategy that constructs a Directed Acyclic Graph and employs Disjoint-Set Union techniques to automatically mine unobserved utterance pairs with valid attribute comparisons. Our framework leverages speaker embeddings from a pretrained FACodec, and incorporates a Relative Timbre Shift-Aware Differential Attention module. This module explicitly models attribute-specific contrasts between paired utterances via differential denoising and contrast amplification mechanisms. Experimental results on the VCTK-RVA benchmark demonstrate that QvTAD achieves substantial improvements across multiple timbre descriptors, with particularly notable gains in cross-speaker generalization scenarios.
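The abstract names the graph-based augmentation only at a high level. As a rough illustration of the idea, the sketch below merges perceptually equal items with a Disjoint-Set Union, builds a Directed Acyclic Graph from the observed "stronger-than" comparisons, and takes its transitive closure to surface unobserved but still valid ordered pairs. The function names, the (a, b) pair convention, and the use of utterance/speaker IDs are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of the graph-based pair mining described in the
# abstract: DSU groups perceptually equal items, a DAG over the group
# representatives encodes observed "a stronger than b" comparisons, and
# reachability in that DAG yields unobserved-but-valid pairs.
from collections import defaultdict

class DSU:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def mine_pairs(greater_pairs, equal_pairs):
    """Return all ordered pairs (a, b) where a's attribute is stronger
    than b's, including pairs only implied by transitivity."""
    dsu = DSU()
    for a, b in equal_pairs:              # perceptually equal -> one group
        dsu.union(a, b)

    edges = defaultdict(set)              # DAG over group representatives
    nodes = set()
    for a, b in greater_pairs:            # observed: a stronger than b
        ra, rb = dsu.find(a), dsu.find(b)
        nodes.update((ra, rb))
        if ra != rb:
            edges[ra].add(rb)

    reach = {}                            # memoized DFS reachability;
    def dfs(u):                           # assumes the comparison graph
        if u in reach:                    # is acyclic, as the paper states
            return reach[u]
        acc = set()
        reach[u] = acc
        for v in edges[u]:
            acc.add(v)
            acc |= dfs(v)
        return acc

    members = defaultdict(list)           # expand groups back to items
    for x in dsu.parent:
        members[dsu.find(x)].append(x)

    mined = set()
    for u in nodes:
        for v in dfs(u):
            for a in members.get(u, [u]):
                for b in members.get(v, [v]):
                    mined.add((a, b))
    return mined

if __name__ == "__main__":
    greater = [("spk1", "spk2"), ("spk2", "spk3")]   # spk1 > spk2 > spk3
    equal = [("spk3", "spk4")]
    print(sorted(mine_pairs(greater, equal)))
    # Output includes the unobserved pairs ("spk1", "spk3"),
    # ("spk1", "spk4"), and ("spk2", "spk4").
```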
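Likewise, the Relative Timbre Shift-Aware Differential Attention module is only named here; one plausible reading, borrowed from the generic differential-attention formulation (two softmax attention maps subtracted so that noise common to both cancels while the remaining contrast is amplified), is sketched below. The cross-attention wiring between the paired utterances, the learnable balance term, and the 256-dimensional embedding size are all assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentialAttention(nn.Module):
    """Minimal single-head differential attention: two attention maps are
    computed and subtracted, cancelling shared attention noise
    (differential denoising) and sharpening what differs between the
    paired inputs (contrast amplification)."""

    def __init__(self, dim, lam=0.5):
        super().__init__()
        self.q1 = nn.Linear(dim, dim)
        self.q2 = nn.Linear(dim, dim)
        self.k1 = nn.Linear(dim, dim)
        self.k2 = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.lam = nn.Parameter(torch.tensor(lam))  # learnable balance
        self.scale = dim ** -0.5

    def forward(self, x_a, x_b):
        # x_a, x_b: (batch, seq, dim) embeddings of the paired utterances.
        # Queries come from one utterance, keys/values from the other, so
        # the output reflects their attribute-specific contrast.
        a1 = F.softmax(
            self.q1(x_a) @ self.k1(x_b).transpose(-2, -1) * self.scale, dim=-1)
        a2 = F.softmax(
            self.q2(x_a) @ self.k2(x_b).transpose(-2, -1) * self.scale, dim=-1)
        return (a1 - self.lam * a2) @ self.v(x_b)  # differential map x values

# Toy usage; a 256-d embedding size is an assumption, not FACodec's spec.
attn = DifferentialAttention(dim=256)
out = attn(torch.randn(2, 10, 256), torch.randn(2, 10, 256))
print(out.shape)  # torch.Size([2, 10, 256])
```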
Similar Papers
Introducing voice timbre attribute detection
Sound
Helps computers tell voices apart by sound.
The Voice Timbre Attribute Detection 2025 Challenge Evaluation Plan
Sound
Helps computers describe voices like humans do.
The First Voice Timbre Attribute Detection Challenge
Sound
Helps computers understand how voices sound different.