ProLAP: Probabilistic Language-Audio Pre-Training

Published: October 21, 2025 | arXiv ID: 2510.18423v1

By: Toranosuke Manabe, Yuchi Ishikawa, Hokuto Munakata, and more

Potential Business Impact:

Helps systems match audio with text descriptions more accurately, improving audio search and retrieval.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Language-audio joint representation learning frameworks typically rely on deterministic embeddings, assuming a one-to-one correspondence between audio and text. In real-world settings, however, the language-audio relationship is inherently many-to-many: one audio segment can be described by multiple captions, and vice versa. To address this, we propose Probabilistic Language-Audio Pre-training (ProLAP), which models this multiplicity as the spread of probability distributions in a joint language-audio embedding space. To learn intra-modal hierarchical relationships effectively, we also introduce two objectives: (i) a hierarchical inclusion loss to promote semantic hierarchical understanding of inputs and (ii) a mask repulsive loss to improve learning efficiency when optimizing the hierarchical inclusion loss. With this training strategy, our model can learn the hierarchical structure inherent in the data even from small datasets, in contrast to prior probabilistic approaches that rely on large-scale datasets. In our experiments, ProLAP outperforms existing deterministic approaches on audio-text retrieval tasks. Moreover, through experiments on the audio traversal task introduced in this paper, we demonstrate that ProLAP captures a plausible semantic hierarchy.
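
The abstract describes embedding each audio clip or caption as a probability distribution whose spread captures the many-to-many relationship, with a hierarchical inclusion loss encouraging more specific inputs to lie inside more general ones. Below is a minimal, hypothetical PyTorch sketch of this style of probabilistic embedding. The names (ProbHead, inclusion_loss), the diagonal-Gaussian parameterization, and the KL-based inclusion term are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of probabilistic embeddings for language-audio pre-training.
# The parameterization and loss forms below are assumptions for illustration;
# ProLAP's actual objectives may differ.
import torch
import torch.nn as nn

class ProbHead(nn.Module):
    """Maps a deterministic backbone feature to a diagonal Gaussian embedding."""
    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, embed_dim)      # mean of the embedding
        self.logvar = nn.Linear(in_dim, embed_dim)  # log-variance: spread models multiplicity

    def forward(self, x):
        return self.mu(x), self.logvar(x)

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians, summed over embedding dimensions."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    return 0.5 * (var_q / var_p + (mu_p - mu_q).pow(2) / var_p
                  - 1.0 + (logvar_p - logvar_q)).sum(dim=-1)

def inclusion_loss(mu_child, logvar_child, mu_parent, logvar_parent):
    """Encourage a more specific (child) distribution to lie inside a more
    general (parent) one, e.g. a full caption inside a masked, abstract one."""
    return kl_diag_gaussians(mu_child, logvar_child, mu_parent, logvar_parent).mean()

# Usage: project paired features and compute one hierarchy-aware term.
head = ProbHead(in_dim=768, embed_dim=128)
feat_full = torch.randn(4, 768)    # e.g. backbone features of full captions
feat_masked = torch.randn(4, 768)  # e.g. features of masked, more general captions
mu_c, lv_c = head(feat_full)
mu_p, lv_p = head(feat_masked)
loss = inclusion_loss(mu_c, lv_c, mu_p, lv_p)
loss.backward()
```

Under this reading, the learned variance gives each input a region rather than a point, so one audio clip can plausibly cover several captions; the mask repulsive loss mentioned in the abstract would then serve as a complementary term that keeps these distributions from collapsing while the inclusion loss is optimized.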

Page Count
5 pages

Category
Electrical Engineering and Systems Science:
Audio and Speech Processing