
Temporally Heterogeneous Graph Contrastive Learning for Multimodal Acoustic Event Classification

Published: September 18, 2025 | arXiv ID: 2509.14893v1

By: Yuanjian Chen, Yang Xiao, Jinjie Huang

Potential Business Impact:

Helps computers understand sounds and sights together.

Business Areas:
Image Recognition, Data and Analytics, Software

Multimodal acoustic event classification plays a key role in audio-visual systems. Although combining audio and visual signals improves recognition, it is still difficult to align them over time and to reduce the effect of noise across modalities. Existing methods often treat audio and visual streams separately, fusing features later with contrastive or mutual information objectives. Recent advances explore multimodal graph learning, but most fail to distinguish between intra- and inter-modal temporal dependencies. To address this, we propose Temporally Heterogeneous Graph-based Contrastive Learning (THGCL). Our framework constructs a temporal graph for each event, where audio and video segments form nodes and their temporal links form edges. We introduce Gaussian processes for intra-modal smoothness, Hawkes processes for inter-modal decay, and contrastive learning to capture fine-grained relationships. Experiments on AudioSet show that THGCL achieves state-of-the-art performance.
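The abstract's edge-weighting idea can be sketched in a few lines. This is not the authors' implementation; the kernel forms, parameter names (`sigma`, `beta`), and the audio-to-video direction of the Hawkes-style decay are illustrative assumptions. Intra-modal edges between segments of the same stream are weighted with a Gaussian kernel over their time gap (temporal smoothness), while inter-modal edges use an exponentially decaying, causal kernel so that an audio segment's influence on a later video segment fades over time:

```python
import numpy as np

def edge_weights(t_a, t_v, sigma=1.0, beta=1.0):
    """Hypothetical THGCL-style edge weighting for one event.

    t_a, t_v: 1-D arrays of segment timestamps (seconds) for the
    audio and video streams. Returns an intra-modal (Gaussian) and
    an inter-modal (Hawkes-style decay) adjacency matrix.
    """
    # Intra-modal smoothness: Gaussian (RBF) kernel over time gaps,
    # so temporally close audio segments are strongly connected.
    gap_aa = t_a[:, None] - t_a[None, :]
    intra_audio = np.exp(-(gap_aa ** 2) / (2 * sigma ** 2))

    # Inter-modal decay: exponential (Hawkes-style) kernel on the
    # audio-to-video time gap; non-causal pairs (video before audio)
    # get zero weight in this sketch.
    gap_av = t_v[None, :] - t_a[:, None]  # video time minus audio time
    inter = np.where(gap_av >= 0.0, beta * np.exp(-beta * gap_av), 0.0)
    return intra_audio, inter

# Toy event: three audio segments, two video segments.
t_a = np.array([0.0, 1.0, 2.0])
t_v = np.array([0.5, 1.5])
intra, inter = edge_weights(t_a, t_v)
```

These weighted edges would then define the per-event heterogeneous graph on which the contrastive objective operates; the contrastive loss itself is not reproduced here, as the abstract gives no further detail.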

Country of Origin
🇦🇺 Australia

Page Count
5 pages

Category
Computer Science:
Sound