Temporally Heterogeneous Graph Contrastive Learning for Multimodal Acoustic Event Classification
By: Yuanjian Chen, Yang Xiao, Jinjie Huang
Potential Business Impact:
Helps computers understand sounds and sights together.
Multimodal acoustic event classification plays a key role in audio-visual systems. Although combining audio and visual signals improves recognition, aligning the two streams over time and suppressing cross-modal noise remain difficult. Existing methods often process the audio and visual streams separately and fuse their features late with contrastive or mutual-information objectives. Recent advances explore multimodal graph learning, but most fail to distinguish intra-modal from inter-modal temporal dependencies. To address this, we propose Temporally Heterogeneous Graph-based Contrastive Learning (THGCL). Our framework constructs a temporal graph for each event, where audio and video segments form nodes and their temporal links form edges. We introduce Gaussian processes to model intra-modal smoothness, Hawkes processes to model inter-modal temporal decay, and contrastive learning to capture fine-grained cross-modal relationships. Experiments on AudioSet show that THGCL achieves state-of-the-art performance.
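To make the graph construction concrete, below is a minimal sketch of how intra- and inter-modal edge weights could be assigned in such a temporal graph. The kernel forms, function names, and hyperparameters are illustrative assumptions based on the abstract, not the authors' implementation: a Gaussian (RBF) kernel stands in for the Gaussian-process smoothness within a modality, and an exponential decay stands in for the Hawkes-style inter-modal decay.

```python
import numpy as np

# Sketch of temporal-graph edge weighting for one audio-visual event.
# All names and parameter values here are hypothetical.

def intra_modal_weight(t_i, t_j, length_scale=1.0):
    """Gaussian (RBF) kernel over segment timestamps: segments of the
    same modality that are close in time get smoothly larger weights."""
    return np.exp(-((t_i - t_j) ** 2) / (2.0 * length_scale ** 2))

def inter_modal_weight(t_audio, t_video, decay=1.0):
    """Hawkes-style exponential decay: a cross-modal edge weakens as
    the time gap between the audio and video segments grows."""
    return np.exp(-decay * abs(t_audio - t_video))

def build_edges(audio_times, video_times):
    """Return weighted edges of a heterogeneous temporal graph whose
    nodes are the audio and video segments of one event."""
    edges = []
    # Intra-modal edges (audio-audio and video-video).
    for times, mod in ((audio_times, "audio"), (video_times, "video")):
        for i in range(len(times)):
            for j in range(i + 1, len(times)):
                edges.append(((mod, i), (mod, j),
                              intra_modal_weight(times[i], times[j])))
    # Inter-modal edges (audio-video) with temporal decay.
    for i, ta in enumerate(audio_times):
        for j, tv in enumerate(video_times):
            edges.append((("audio", i), ("video", j),
                          inter_modal_weight(ta, tv)))
    return edges

# Example: ten 1-second audio segments and four video keyframes.
edges = build_edges(np.arange(10.0), np.arange(0.0, 10.0, 2.5))
print(edges[:3])
```

A contrastive objective such as InfoNCE could then be applied over node embeddings produced by a graph encoder on these weighted edges, pulling temporally linked audio-video pairs together; the paper's actual loss and encoder are not specified in the abstract.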
Similar Papers
Hybrid Hypergraph Networks for Multimodal Sequence Data Classification
Machine Learning (CS)
Helps computers understand videos with sound better.
Simple and Efficient Heterogeneous Temporal Graph Neural Network
Machine Learning (CS)
Makes computers understand changing online connections faster.
Hybrid Matrix Factorization Based Graph Contrastive Learning for Recommendation System
Information Retrieval
Suggests better movies and products you might like.