Enhancing Sa2VA for Referent Video Object Segmentation: 2nd Solution for 7th LSVOS RVOS Track
By: Ran Hong, Feng Lu, Leilei Cao, and more
Potential Business Impact:
Finds specific things in videos using words.
Referential Video Object Segmentation (RVOS) aims to segment all objects in a video that match a given natural language description, bridging the gap between vision and language understanding. Recent work, such as Sa2VA, combines Large Language Models (LLMs) with SAM 2, leveraging the strong video reasoning capability of LLMs to guide video segmentation. In this work, we present a training-free framework that substantially improves Sa2VA's performance on the RVOS task. Our method introduces two key components: (1) a Video-Language Checker that explicitly verifies whether the subject and action described in the query actually appear in the video, thereby reducing false positives; and (2) a Key-Frame Sampler that adaptively selects informative frames to better capture both early object appearances and long-range temporal context. Without any additional training, our approach achieves a J&F score of 64.14% on the MeViS test set, ranking 2nd in the RVOS track of the 7th LSVOS Challenge at ICCV 2025.
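To make the two components more concrete, here is a minimal Python sketch of how a training-free checker and sampler could be wired around an existing segmentation model. This is an illustration only: the helper names (`verify_query_in_video`, `sample_key_frames`, `vlm_score`), the score threshold, and the "keep a few early frames, then spread the rest uniformly" heuristic are assumptions for exposition, not the authors' released implementation.

```python
# Illustrative sketch only. Function names, the threshold, and the sampling
# heuristic are assumptions; the paper's actual implementation may differ.

from typing import Callable, List, Sequence


def verify_query_in_video(
    frames: Sequence,
    query: str,
    vlm_score: Callable[[Sequence, str], float],
    threshold: float = 0.5,
) -> bool:
    """Video-Language Checker (hypothetical): ask a vision-language scorer
    whether the subject/action in the query actually appears in the video.
    If not, downstream segmentation can be skipped to avoid false positives."""
    return vlm_score(frames, query) >= threshold


def sample_key_frames(num_frames: int, budget: int, head: int = 4) -> List[int]:
    """Key-Frame Sampler (hypothetical): keep a few early frames so first
    object appearances are not missed, then spread the remaining budget
    uniformly to cover long-range temporal context."""
    if num_frames <= budget:
        return list(range(num_frames))
    head = min(head, budget)
    early = list(range(head))
    remaining = budget - head
    step = (num_frames - head) / max(remaining, 1)
    spread = [head + int(i * step) for i in range(remaining)]
    # De-duplicate while preserving order.
    seen, out = set(), []
    for idx in early + spread:
        if idx not in seen:
            seen.add(idx)
            out.append(idx)
    return out


if __name__ == "__main__":
    # Toy usage: a 120-frame clip with a 12-frame budget.
    print(sample_key_frames(num_frames=120, budget=12))
```

In this sketch, the selected frame indices would then be passed to the underlying segmentation model (e.g., Sa2VA with SAM 2) only when the checker accepts the query, which is what makes the pipeline training-free.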
Similar Papers
The 1st Solution for 7th LSVOS RVOS Track: SaSaSa2VA
CV and Pattern Recognition
Helps computers find and follow anything you describe.
4th PVUW MeViS 3rd Place Report: Sa2VA
CV and Pattern Recognition
Helps computers find objects in videos using words.
3rd Place Report of LSVOS 2025 MeViS Track: Sa2VA-i: Improving Sa2VA Results with Consistent Training and Inference
CV and Pattern Recognition
Helps computers find objects in videos better.