Score: 1

Empirical Results for Adjusting Truncated Backpropagation Through Time while Training Neural Audio Effects

Published: December 8, 2025 | arXiv ID: 2512.07393v1

By: Yann Bourdin, Pierrick Legrand, Fanny Roche

Potential Business Impact:

Trains neural networks that emulate audio effects, such as compressors, faster and more accurately.

Business Areas:
A/B Testing Data and Analytics

This paper investigates the optimization of Truncated Backpropagation Through Time (TBPTT) for training neural networks in digital audio effect modeling, with a focus on dynamic range compression. The study evaluates key TBPTT hyperparameters -- sequence number, batch size, and sequence length -- and their influence on model performance. Using a convolutional-recurrent architecture, we conduct extensive experiments across datasets with and without conditioning by user controls. Results demonstrate that careful tuning of these parameters enhances model accuracy and training stability while also reducing computational demands. Objective evaluations confirm improved performance with the optimized settings, and subjective listening tests indicate that the revised TBPTT configuration maintains high perceptual quality.
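To make the TBPTT hyperparameters concrete, below is a minimal PyTorch sketch of truncated backpropagation through time for a toy convolutional-recurrent audio model. All names here (ConvRecurrentCompressor, tbptt_step) are illustrative assumptions, not the authors' implementation: it only shows how the truncation sequence length bounds the gradient path, while batch size and the number of sequences per update are fixed by how the data is batched.

```python
import torch
import torch.nn as nn

class ConvRecurrentCompressor(nn.Module):
    """Toy conv-recurrent model standing in for a neural compressor."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=5, padding=2)
        self.rnn = nn.GRU(16, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x, h=None):
        # x: (batch, samples) mono audio
        feats = self.conv(x.unsqueeze(1)).transpose(1, 2)  # (batch, samples, 16)
        y, h = self.rnn(feats, h)
        return self.out(y).squeeze(-1), h

def tbptt_step(model, optimizer, loss_fn, inputs, targets, seq_len):
    """One pass over a batch of long sequences, truncating gradients every seq_len samples."""
    h = None
    total_loss = 0.0
    for start in range(0, inputs.shape[1], seq_len):
        x = inputs[:, start:start + seq_len]
        y = targets[:, start:start + seq_len]
        pred, h = model(x, h)
        loss = loss_fn(pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        h = h.detach()  # carry state forward but cut the gradient path here
        total_loss += loss.item()
    return total_loss

# Example usage with random data (batch size 8, 16384-sample sequences, truncation length 2048).
model = ConvRecurrentCompressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
audio, target = torch.randn(8, 16384), torch.randn(8, 16384)
tbptt_step(model, opt, nn.MSELoss(), audio, target, seq_len=2048)
```

In this sketch, shortening seq_len reduces memory and per-update cost at the risk of losing long-range temporal context, which is the trade-off the paper's hyperparameter study examines.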

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)