Temporal-Conditional Referring Video Object Segmentation with Noise-Free Text-to-Video Diffusion Model
By: Ruixin Zhang, Jiaqing Fan, Yifan Liao, and more
Potential Business Impact:
Helps computers find objects in videos using text descriptions.
Referring Video Object Segmentation (RVOS) aims to segment specific objects in a video according to textual descriptions. We observe that recent RVOS approaches often place excessive emphasis on feature extraction and temporal modeling while relatively neglecting the design of the segmentation head, where considerable room for improvement remains. To address this, we propose a Temporal-Conditional Referring Video Object Segmentation model, which integrates existing segmentation methods to enhance boundary segmentation capability. Furthermore, our model leverages a text-to-video diffusion model for feature extraction, from which we remove the traditional noise prediction module; this prevents the randomness of noise from degrading segmentation accuracy, simplifying the model while improving performance. Finally, to overcome the limited feature extraction capability of the VAE, we design a Temporal Context Mask Refinement (TCMR) module, which significantly improves segmentation quality without introducing complex designs. We evaluate our method on four public RVOS benchmarks, where it consistently achieves state-of-the-art performance.
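To make the described pipeline concrete, below is a minimal PyTorch sketch of the noise-free feature-extraction idea: clean video latents are passed once through a diffusion backbone conditioned on the text (no noise is added and no noise is predicted), and the resulting features feed a temporal mask-refinement head. All names here (DummyBackbone, TCMRHead), the additive text conditioning, and the tensor shapes are illustrative assumptions, not the authors' actual code.

```python
import torch
import torch.nn as nn

class DummyBackbone(nn.Module):
    """Stand-in for a text-to-video diffusion U-Net used purely as a
    feature extractor; the usual noise-prediction output is removed."""
    def __init__(self, in_ch=4, feat_dim=64):
        super().__init__()
        self.body = nn.Conv3d(in_ch, feat_dim, kernel_size=3, padding=1)

    def forward(self, latents, text_emb):
        # Text conditioning is sketched as a simple additive bias.
        return self.body(latents) + text_emb.view(1, -1, 1, 1, 1)

class TCMRHead(nn.Module):
    """Hypothetical Temporal Context Mask Refinement head: 3D convs
    aggregate temporal context before predicting per-pixel mask logits."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv3d(feat_dim, hidden, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv3d(hidden, 1, kernel_size=1),
        )

    def forward(self, feats):
        return self.refine(feats)  # (B, 1, T, H, W) mask logits

# Toy forward pass: clean VAE latents (no noise added) -> backbone -> TCMR.
latents = torch.randn(1, 4, 8, 32, 32)   # (B, C, T, H, W) video latents
text_emb = torch.randn(64)               # pooled referring-expression feature
backbone, head = DummyBackbone(), TCMRHead()
masks = torch.sigmoid(head(backbone(latents, text_emb)))
print(masks.shape)  # torch.Size([1, 1, 8, 32, 32])
```

The key design point the sketch illustrates is that the backbone runs a single deterministic forward pass, so segmentation quality does not depend on sampled noise.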
Similar Papers
Temporal Prompting Matters: Rethinking Referring Video Object Segmentation
CV and Pattern Recognition
Finds specific things in videos using words.
Referring Video Object Segmentation with Cross-Modality Proxy Queries
CV and Pattern Recognition
Helps computers find specific things in videos.
Deforming Videos to Masks: Flow Matching for Referring Video Segmentation
CV and Pattern Recognition
Helps computers find and track objects in videos.