Score: 1

WAVE: Learning Unified & Versatile Audio-Visual Embeddings with Multimodal LLM

Published: September 26, 2025 | arXiv ID: 2509.21990v1

By: Changli Tang, Qinfan Xiao, Ke Mei, and more

Potential Business Impact:

Lets computers understand sound, video, and text together, so a query in any one of these modalities can retrieve content in the others.

Business Areas:
Semantic Web, Internet Services

While embeddings from multimodal large language models (LLMs) excel as general-purpose representations, their application to dynamic modalities like audio and video remains underexplored. We introduce WAVE (unified & versatile audio-visual embeddings), the first LLM-based embedding model to create a unified representation space for text, audio, and video. WAVE employs a novel hierarchical feature fusion strategy and a joint multi-modal, multi-task training approach to enable two key capabilities: any-to-any cross-modal retrieval and the generation of prompt-aware embeddings tailored to user instructions. Experimentally, WAVE sets a new state-of-the-art on the MMEB-v2 video benchmark and achieves superior results in audio and video-to-audio retrieval. Its prompt-aware nature also yields remarkable performance in multimodal question answering, significantly outperforming existing embedding models. Ablation studies validate our joint training strategy, demonstrating improved performance across all modalities. With a newly introduced benchmark for versatile audio-visual learning, WAVE opens up broad possibilities for cross-modal, any-to-any applications. Our code, checkpoints, and data will be released.
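To make the any-to-any retrieval idea concrete, here is a minimal Python sketch of retrieval in a single shared embedding space. Everything in it is an assumption for illustration: the embed() stand-in, the 512-dimensional space, and the retrieve() helper are hypothetical, not the authors' implementation, which produces its embeddings with a multimodal LLM and hierarchical feature fusion.

```python
# Illustrative sketch (not WAVE's code): once text, audio, and video all
# land in one shared embedding space, any modality can query any other
# with plain cosine similarity.
import numpy as np

DIM = 512  # assumed embedding dimensionality


def embed(item_id: str, prompt: str = "") -> np.ndarray:
    """Stand-in encoder returning a unit-norm pseudo-embedding.

    In WAVE, the multimodal LLM would map each (input, instruction) pair
    into this shared space; the optional prompt is what makes the
    embedding "prompt-aware".
    """
    seed = abs(hash((item_id, prompt))) % (2**32)
    v = np.random.default_rng(seed).standard_normal(DIM)
    return v / np.linalg.norm(v)  # normalize so dot product = cosine similarity


def retrieve(query: np.ndarray, gallery: dict[str, np.ndarray], k: int = 3):
    """Rank gallery items by cosine similarity to the query embedding."""
    scores = {name: float(query @ emb) for name, emb in gallery.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]


# A mixed gallery: because every modality shares one space, a single
# index can hold audio and video items side by side.
gallery = {f"video_{i}": embed(f"video_{i}") for i in range(5)}
gallery |= {f"audio_{i}": embed(f"audio_{i}") for i in range(5)}

# Text-to-anything retrieval, tailored by a user instruction.
text_query = embed("a dog barking at a mail carrier", prompt="find matching audio")
for name, score in retrieve(text_query, gallery):
    print(f"{name}: {score:.3f}")
```

The property the sketch exercises is the one the abstract claims: with a unified space, no per-pair retrieval model is needed, since one similarity function covers every query-target combination of text, audio, and video.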

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition