DGFNet: End-to-End Audio-Visual Source Separation Based on Dynamic Gating Fusion
By: Yinfeng Yu, Shiyu Sun
Potential Business Impact:
Separates sounds from videos better.
Current Audio-Visual Source Separation methods primarily adopt one of two design strategies. The first fuses audio and visual features at the bottleneck layer of the encoder and then processes the fused features through the decoder. However, when there is a significant disparity between the two modalities, this approach may lose critical information. The second strategy avoids direct fusion and instead relies on the decoder to handle the interaction between audio and visual features. If the encoder fails to integrate information across modalities adequately, however, the decoder may be unable to capture the complex relationships between them. To address these issues, this paper proposes a fusion method based on a gating mechanism that dynamically adjusts the degree of modality fusion. This approach mitigates the limitations of relying solely on the decoder and enables efficient collaboration between audio and visual features. Additionally, an audio attention module is introduced to enhance the expressive capacity of the audio features, further improving model performance. Experimental results demonstrate that our method achieves significant performance improvements on two benchmark datasets, validating its effectiveness and advantages in Audio-Visual Source Separation tasks.
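The abstract does not give the module's equations, but the idea of a gate that dynamically adjusts the degree of modality fusion can be illustrated with a minimal PyTorch sketch: a sigmoid gate, computed from both modalities, interpolates per channel between the raw audio features and projected visual features, and a self-attention layer stands in for the audio attention module. All names and shapes here (GatedFusion, audio_dim, visual_dim, the choice of four attention heads) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Sketch of gating-based audio-visual fusion (not the paper's code).

    A sigmoid gate conditioned on both modalities decides, per channel,
    how much visual evidence to mix into the audio stream, so the degree
    of fusion adapts to the input rather than being fixed.
    """

    def __init__(self, audio_dim: int, visual_dim: int):
        super().__init__()
        # Audio attention: self-attention over the audio time axis,
        # loosely standing in for the paper's audio attention module.
        self.audio_attn = nn.MultiheadAttention(
            audio_dim, num_heads=4, batch_first=True
        )
        # Project visual features into the audio feature space.
        self.visual_proj = nn.Linear(visual_dim, audio_dim)
        # Gate conditioned on the concatenated modalities.
        self.gate = nn.Sequential(
            nn.Linear(audio_dim * 2, audio_dim),
            nn.Sigmoid(),
        )

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # Strengthen the audio representation before fusing.
        audio, _ = self.audio_attn(audio, audio, audio)
        v = self.visual_proj(visual)                  # (B, T, audio_dim)
        g = self.gate(torch.cat([audio, v], dim=-1))  # values in [0, 1]
        # g near 1: lean on visual evidence; g near 0: keep the audio path.
        return g * v + (1.0 - g) * audio


fusion = GatedFusion(audio_dim=256, visual_dim=512)
audio_feat = torch.randn(4, 100, 256)    # batch, time steps, channels
visual_feat = torch.randn(4, 100, 512)
fused = fusion(audio_feat, visual_feat)  # -> (4, 100, 256)
```

Because the gate is learned and input-dependent, the network can fall back to the audio-only path when the visual stream is uninformative, which is one plausible reading of how such a design avoids the information loss of fixed bottleneck fusion.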
Similar Papers
Audio-Guided Dynamic Modality Fusion with Stereo-Aware Attention for Audio-Visual Navigation
Artificial Intelligence
Helps robots find sounds in noisy places.
GateFusion: Hierarchical Gated Cross-Modal Fusion for Active Speaker Detection
CV and Pattern Recognition
Finds who is talking in videos better.
DTFSal: Audio-Visual Dynamic Token Fusion for Video Saliency Prediction
CV and Pattern Recognition
Helps computers know what's important in videos.