Self-supervised restoration of singing voice degraded by pitch shifting using shallow diffusion
By: Yunyi Liu, Taketo Akama
Pitch shifting is an essential operation in singing voice production. However, conventional signal-processing approaches exhibit well-known trade-offs, such as formant shifts and robotic coloration, that become more severe at larger transposition intervals. This paper targets high-quality pitch shifting for singing by reframing it as a restoration problem: given an audio track that has been pitch-shifted (and thus contaminated by artifacts), we recover a natural-sounding performance while preserving its melody and timing. Specifically, we use a lightweight, mel-space diffusion model driven by frame-level acoustic features such as f0, volume, and content features. We construct training pairs in a self-supervised manner by applying pitch shifts and reversing them, which simulates realistic artifacts while retaining ground truth. On a curated singing set, the proposed approach substantially reduces pitch-shift artifacts compared to representative classical baselines, as measured by both statistical metrics and pairwise acoustic measures. The results suggest that restoration-based pitch shifting could be a viable approach toward artifact-resistant transposition in vocal production workflows.
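The self-supervised pair construction described above can be illustrated with a short sketch: shift a clean vocal away from its original pitch and then shift it back, so the round trip preserves the melody but leaves the shifter's artifacts in the degraded copy. This is a minimal sketch (not the authors' code); the function name, shift range, and use of librosa's pitch shifter are illustrative assumptions.

```python
# Sketch: build (degraded, clean) training pairs by a pitch-shift round trip.
# The degraded signal keeps the original melody and timing but carries
# pitch-shift artifacts; the clean signal serves as the restoration target.
import librosa
import numpy as np


def make_training_pair(clean, sr, max_semitones=7, rng=None):
    """Return (degraded, clean) for one clean vocal excerpt."""
    rng = rng or np.random.default_rng()
    n_steps = float(rng.uniform(-max_semitones, max_semitones))
    # Shift away from the original pitch, then shift back by the same amount:
    # the melody is restored, but formant/coloration artifacts remain.
    shifted = librosa.effects.pitch_shift(clean, sr=sr, n_steps=n_steps)
    degraded = librosa.effects.pitch_shift(shifted, sr=sr, n_steps=-n_steps)
    return degraded, clean


if __name__ == "__main__":
    # "vocal.wav" is a hypothetical input file for demonstration.
    y, sr = librosa.load("vocal.wav", sr=None, mono=True)
    degraded, target = make_training_pair(y, sr)
```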
Similar Papers
Generating Separated Singing Vocals Using a Diffusion Model Conditioned on Music Mixtures
Sound
Cleans up music to hear just the singer.
DiTSinger: Scaling Singing Voice Synthesis with Diffusion Transformer and Implicit Alignment
Sound
Makes AI sing songs with real-sounding voices.
Efficient and Fast Generative-Based Singing Voice Separation using a Latent Diffusion Model
Sound
Separates singing voice from music.