GenTSE: Enhancing Target Speaker Extraction via a Coarse-to-Fine Generative Language Model
By: Haoyang Li, Xuyi Zhuang, Azmat Adnan, and more
Language Model (LM)-based generative modeling has emerged as a promising direction for target speaker extraction (TSE), offering potential for improved generalization and high-fidelity speech. We present GenTSE, a two-stage decoder-only generative LM approach to TSE: Stage-1 predicts coarse semantic tokens, and Stage-2 generates fine acoustic tokens. Separating semantics from acoustics stabilizes decoding and yields more faithful, content-aligned target speech. Both stages condition on continuous SSL or codec embeddings, which provide richer context than discretized prompts. To reduce exposure bias, we employ a Frozen-LM Conditioning training strategy that conditions the LMs on tokens predicted by earlier checkpoints, narrowing the gap between teacher-forcing training and autoregressive inference. We further apply Direct Preference Optimization (DPO) to better align outputs with human perceptual preferences. Experiments on Libri2Mix show that GenTSE surpasses previous LM-based systems in speech quality, intelligibility, and speaker consistency.
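The coarse-to-fine pipeline described above can be sketched schematically. The following is a minimal toy illustration, not the paper's implementation: the function names, dimensions, vocabulary sizes, and the random stand-in "LM steps" are all hypothetical. It only shows the data flow the abstract describes, in which Stage-1 predicts semantic tokens from continuous mixture and enrollment embeddings, and Stage-2 generates fine acoustic tokens conditioned on those semantic tokens plus the same continuous context.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_semantic_lm(mixture_emb, speaker_emb, n_tokens=8, vocab=32):
    """Hypothetical Stage-1: autoregressively predict coarse semantic
    tokens for the target speaker, conditioned on continuous embeddings
    (e.g., SSL features of the mixture, enrollment speaker embedding)."""
    context = np.concatenate([mixture_emb, speaker_emb])
    tokens = []
    for _ in range(n_tokens):
        # Stand-in for one decoder-only LM step over the running context.
        logits = rng.standard_normal(vocab) + context.mean()
        tokens.append(int(np.argmax(logits)))
    return tokens

def stage2_acoustic_lm(semantic_tokens, mixture_emb, n_fine=2, vocab=64):
    """Hypothetical Stage-2: generate fine acoustic (codec) tokens,
    conditioned on the Stage-1 semantic tokens and continuous context."""
    fine = []
    for tok in semantic_tokens:
        # Stand-in LM step: n_fine codebook tokens per semantic token.
        logits = rng.standard_normal((n_fine, vocab)) + tok + mixture_emb.mean()
        fine.append(np.argmax(logits, axis=-1).tolist())
    return fine

mixture_emb = rng.standard_normal(16)   # toy continuous mixture embedding
speaker_emb = rng.standard_normal(16)   # toy enrollment embedding
sem = stage1_semantic_lm(mixture_emb, speaker_emb)
acoustic = stage2_acoustic_lm(sem, mixture_emb)
print(len(sem), "semantic tokens ->", len(acoustic), "acoustic frames")
```

In the real system, a neural codec decoder would then synthesize the waveform from the fine acoustic tokens; the split keeps Stage-1 focused on content and Stage-2 on acoustic detail.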