Pitch-Conditioned Instrument Sound Synthesis From an Interactive Timbre Latent Space
By: Christian Limberg, Fares Schulz, Zhe Zhang and more
Potential Business Impact:
Makes computer-generated instrument sounds realistic and easy to control.
This paper presents a novel approach to neural instrument sound synthesis using a two-stage semi-supervised learning framework capable of generating pitch-accurate, high-quality music samples from an expressive timbre latent space. Existing approaches that achieve sufficient quality for music production often rely on high-dimensional latent representations that are difficult to navigate and provide unintuitive user experiences. We address this limitation through a two-stage training paradigm: first, we train a pitch-timbre disentangled 2D representation of audio samples using a Variational Autoencoder; second, we use this representation as conditioning input for a Transformer-based generative model. The learned 2D latent space serves as an intuitive interface for navigating and exploring the sound landscape. We demonstrate that the proposed method effectively learns a disentangled timbre space, enabling expressive and controllable audio generation with reliable pitch conditioning. Experimental results show the model's ability to capture subtle variations in timbre while maintaining a high degree of pitch accuracy. The usability of our method is demonstrated in an interactive web application, highlighting its potential as a step towards future music production environments that are both intuitive and creatively empowering: https://pgesam.faresschulz.com
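To make the two-stage design concrete, here is a minimal, hypothetical sketch in PyTorch: a VAE compresses audio features to a 2D timbre latent, and an autoregressive Transformer generates audio tokens conditioned on that latent plus a discrete pitch label. All module names, dimensions, and the token vocabulary are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TimbreVAE(nn.Module):
    """Stage 1 (assumed): encode an audio feature vector into a 2D timbre latent."""
    def __init__(self, feat_dim=512, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

class PitchConditionedTransformer(nn.Module):
    """Stage 2 (assumed): autoregressive Transformer over audio tokens,
    conditioned on the 2D timbre latent and a MIDI pitch class."""
    def __init__(self, vocab=1024, d_model=256, n_pitches=128):
        super().__init__()
        self.token_emb = nn.Embedding(vocab, d_model)
        self.pitch_emb = nn.Embedding(n_pitches, d_model)  # pitch conditioning
        self.timbre_proj = nn.Linear(2, d_model)           # 2D latent -> model dim
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens, pitch, timbre_z):
        # Prepend the two conditioning embeddings to the token sequence.
        cond = torch.stack([self.pitch_emb(pitch),
                            self.timbre_proj(timbre_z)], dim=1)
        x = torch.cat([cond, self.token_emb(tokens)], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        out = self.backbone(x, mask=mask)
        return self.head(out)[:, cond.size(1):]  # logits for the token positions

# Usage sketch: map a sample to a 2D point in timbre space, then generate
# token logits for a chosen MIDI pitch (60 = middle C) at that point.
vae = TimbreVAE()
model = PitchConditionedTransformer()
feats = torch.randn(1, 512)                      # placeholder audio features
_, timbre_z, _ = vae(feats)                      # posterior mean as the latent
logits = model(torch.zeros(1, 8, dtype=torch.long),
               pitch=torch.tensor([60]), timbre_z=timbre_z)
```

Prefixing the conditioning embeddings as extra sequence positions is just one plausible way to inject the pitch and timbre signals; the key property the abstract describes is that the 2D latent alone steers timbre, so a user can navigate the space directly.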
Similar Papers
Latent Multi-view Learning for Robust Environmental Sound Representations
Sound
Helps computers understand sounds better.
A Controllable Perceptual Feature Generative Model for Melody Harmonization via Conditional Variational Autoencoder
Sound
Creates new music with feeling and style.
Learning to Upsample and Upmix Audio in the Latent Domain
Sound
Makes audio editing much faster and easier.