ProLAP: Probabilistic Language-Audio Pre-Training
By: Toranosuke Manabe, Yuchi Ishikawa, Hokuto Munakata, and more
Potential Business Impact:
Helps computers understand sounds and words better.
Language-audio joint representation learning frameworks typically rely on deterministic embeddings, assuming a one-to-one correspondence between audio and text. In real-world settings, however, the language-audio relationship is inherently many-to-many: one audio segment can be described by multiple captions, and one caption can describe multiple audio segments. To address this, we propose Probabilistic Language-Audio Pre-training (ProLAP), which models this multiplicity as the spread of probability distributions in a joint language-audio embedding space. To learn intra-modal hierarchical relationships effectively, we also introduce two objectives: (i) a hierarchical inclusion loss that promotes semantic hierarchical understanding of inputs and (ii) a mask repulsive loss that improves learning efficiency when optimizing the hierarchical inclusion loss. With this training strategy, our model can learn the hierarchical structure inherent in the data even from small datasets, in contrast to prior probabilistic approaches that rely on large-scale datasets. In our experiments, ProLAP outperforms existing deterministic approaches on audio-text retrieval tasks. Moreover, through experiments on the audio traversal task introduced in this paper, we demonstrate that ProLAP captures a plausible semantic hierarchy.
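The abstract describes the core idea (embeddings as probability distributions whose spread encodes many-to-many ambiguity, plus an intra-modal hierarchical inclusion objective) without giving the exact formulation. The following is a minimal PyTorch sketch under stated assumptions: Gaussian embedding heads with reparameterized sampling and a hinge-style inclusion penalty on log-variances. `ProbabilisticHead`, `sample_embeddings`, and `inclusion_penalty` are hypothetical names for illustration, not the paper's implementation.

```python
# Hypothetical sketch of probabilistic language-audio embeddings in the
# spirit of ProLAP. The paper's exact architecture and losses are not
# reproduced here; every module, dimension, and loss below is an
# illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProbabilisticHead(nn.Module):
    """Maps a deterministic encoder feature to a Gaussian (mean, log-variance),
    so ambiguity is represented by the spread of the distribution."""

    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, embed_dim)
        self.logvar = nn.Linear(in_dim, embed_dim)

    def forward(self, x: torch.Tensor):
        return self.mu(x), self.logvar(x)


def sample_embeddings(mu: torch.Tensor, logvar: torch.Tensor, n_samples: int = 8):
    """Reparameterized samples from the predicted Gaussian; downstream
    contrastive objectives can be computed over these samples."""
    std = (0.5 * logvar).exp()
    eps = torch.randn(n_samples, *mu.shape, device=mu.device)
    return mu.unsqueeze(0) + eps * std.unsqueeze(0)


def inclusion_penalty(logvar_specific: torch.Tensor,
                      logvar_general: torch.Tensor,
                      margin: float = 0.0) -> torch.Tensor:
    """Illustrative stand-in for a hierarchical inclusion objective: a hinge
    that asks the more general input's distribution to be at least as broad
    as the more specific one's."""
    return F.relu(logvar_specific - logvar_general + margin).mean()


# Dummy usage: pooled features for 4 audio clips and their captions.
audio_feat, text_feat = torch.randn(4, 512), torch.randn(4, 512)
head_a, head_t = ProbabilisticHead(512, 256), ProbabilisticHead(512, 256)
mu_a, logvar_a = head_a(audio_feat)
mu_t, logvar_t = head_t(text_feat)
samples_a = sample_embeddings(mu_a, logvar_a)      # shape: (8, 4, 256)

# Hypothetical intra-modal pair: a masked caption is treated as more general
# than the full caption, so its distribution should be broader.
masked_text_feat = torch.randn(4, 512)             # stand-in for a masked caption
mu_m, logvar_m = head_t(masked_text_feat)
loss_incl = inclusion_penalty(logvar_t, logvar_m)
```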
Similar Papers
Revisiting Audio-language Pretraining for Learning General-purpose Audio Representation
Audio and Speech Processing
Teaches computers to understand all sounds.
SLAP: Learning Speaker and Health-Related Representations from Natural Language Supervision
Audio and Speech Processing
Lets computers understand health from voices.
Bridging Language Gaps: Enhancing Few-Shot Language Adaptation
Computation and Language
Helps computers learn many languages with less data.