A Comparative Study on Positional Encoding for Time-frequency Domain Dual-path Transformer-based Source Separation Models
By: Kohei Saijo, Tetsuji Ogawa
Potential Business Impact:
Separates sounds better with positional encodings, but only for clips no longer than those seen in training; without them, models handle longer audio better.
In this study, we investigate the impact of positional encoding (PE) on source separation performance and on the ability to generalize to long sequences (length extrapolation) in Transformer-based time-frequency (TF) domain dual-path models. Length extrapolation is a crucial capability for TF-domain dual-path models, as it affects not only their performance on long-duration inputs but also their generalizability to signals with unseen sampling rates. While PE is known to significantly impact length extrapolation, there has been limited research exploring the choice of PE for TF-domain dual-path models from this perspective. To address this gap, we compare various PE methods using a recent state-of-the-art model, TF-Locoformer, as the base architecture. Our analysis yields the following key findings: (i) When handling sequences that are the same length as or shorter than those seen during training, models with PEs achieve better performance. (ii) However, models without PE exhibit superior length extrapolation. This trend is particularly pronounced when the model contains convolutional layers.
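As a rough illustration of the kind of PE choice being compared, the sketch below shows a standard sinusoidal absolute positional encoding added along the frame (time) axis of a TF-domain feature tensor, with the "no PE" configuration corresponding to simply skipping the addition. This is a minimal, hypothetical example; the tensor shapes and the placement of the PE are assumptions for illustration and are not taken from the paper or from TF-Locoformer's implementation.

```python
# Minimal sketch (assumed shapes, not the paper's implementation) of applying
# sinusoidal absolute PE along the frame axis of a TF-domain representation.
import math
import torch


def sinusoidal_pe(num_positions: int, dim: int) -> torch.Tensor:
    """Standard sinusoidal PE table of shape (num_positions, dim)."""
    position = torch.arange(num_positions).unsqueeze(1)                        # (T, 1)
    div_term = torch.exp(torch.arange(0, dim, 2) * (-math.log(10000.0) / dim))  # (dim/2,)
    pe = torch.zeros(num_positions, dim)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe


# Hypothetical TF-domain feature: (batch, freq_bins, frames, embed_dim).
x = torch.randn(2, 65, 100, 64)
pe = sinusoidal_pe(num_positions=x.shape[2], dim=x.shape[3])

# Add PE along the frame axis before a temporal transformer pass.
# Omitting this addition would correspond to a "no PE" configuration,
# which the study finds extrapolates better to longer sequences.
x_with_pe = x + pe.view(1, 1, x.shape[2], x.shape[3])
```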
Similar Papers
SeqPE: Transformer with Sequential Position Encoding
Machine Learning (CS)
Helps AI understand longer texts and images.
Impact of Positional Encoding: Clean and Adversarial Rademacher Complexity for Transformers under In-Context Regression
Machine Learning (Stat)
Makes AI models less accurate and more easily fooled.
ExPe: Exact Positional Encodings for Generative Transformer Models with Extrapolating Capabilities
Computation and Language
Lets AI understand longer sentences it hasn't seen.