Object-Aware Video Matting with Cross-Frame Guidance
By: Huayu Zhang, Dongyue Wu, Yuanjie Shao, and more
Potential Business Impact:
Lets computers cleanly cut people out of videos without hand-drawn guides.
Recently, trimap-free methods have drawn increasing attention in human video matting due to their promising performance. Nevertheless, these methods still suffer from the lack of deterministic foreground-background cues, which impairs their ability to consistently identify and locate foreground targets over time and to mine fine-grained details. In this paper, we present a trimap-free Object-Aware Video Matting (OAVM) framework, which can perceive different objects, enabling joint recognition of foreground objects and refinement of edge details. Specifically, we propose an Object-Guided Correction and Refinement (OGCR) module, which employs cross-frame guidance to aggregate object-level instance information into pixel-level detail features, thereby promoting their synergy. Furthermore, we design a Sequential Foreground Merging augmentation strategy to diversify sequential scenarios and enhance the network's capacity for object discrimination. Extensive experiments on recent, widely used synthetic and real-world benchmarks demonstrate the state-of-the-art performance of our OAVM with only an initial coarse mask. The code and model will be made available.
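To make the augmentation idea concrete, below is a minimal sketch of what a "Sequential Foreground Merging"-style augmentation could look like, based only on the abstract's description: a distractor foreground is composited into a clip alongside the target foreground, while the labels keep only the target's alpha, so the network must discriminate objects over time. All function and parameter names are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of a Sequential-Foreground-Merging-style augmentation,
# inferred from the abstract; not the authors' code.
import numpy as np

def composite(fg, alpha, bg):
    """Alpha-composite one RGB frame over a background (values in [0, 1])."""
    return alpha[..., None] * fg + (1.0 - alpha[..., None]) * bg

def sequential_foreground_merging(bg_seq, target_fg, target_alpha,
                                  distractor_fg, distractor_alpha,
                                  rng=None):
    """Merge a distractor foreground into a clip behind the target foreground.

    bg_seq, target_fg, distractor_fg: (T, H, W, 3) float arrays in [0, 1]
    target_alpha, distractor_alpha:   (T, H, W) float arrays in [0, 1]
    Returns augmented frames plus the unchanged target alphas, so the model
    must keep matting the intended object only.
    """
    rng = rng or np.random.default_rng()
    width = bg_seq.shape[2]
    # One random horizontal shift applied to the whole clip, so the distractor
    # moves coherently over time (a sequential, not per-frame, perturbation).
    shift = int(rng.integers(-width // 4, width // 4 + 1))
    frames, alphas = [], []
    for t in range(bg_seq.shape[0]):
        d_fg = np.roll(distractor_fg[t], shift, axis=1)
        d_a = np.roll(distractor_alpha[t], shift, axis=1)
        frame = composite(d_fg, d_a, bg_seq[t])                   # distractor first
        frame = composite(target_fg[t], target_alpha[t], frame)   # target on top
        frames.append(frame)
        alphas.append(target_alpha[t])                            # labels unchanged
    return np.stack(frames), np.stack(alphas)
```

In this reading, the augmentation's value comes from the label asymmetry: extra foregrounds appear in the input sequence, but supervision still targets a single object, which is what pushes the network toward object discrimination rather than generic foreground extraction.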
Similar Papers
Generative Video Matting
CV and Pattern Recognition
Makes videos look real by separating people from backgrounds.
Uncertainty-Guided Face Matting for Occlusion-Aware Face Transformation
CV and Pattern Recognition
Makes face filters work even with hidden faces.
MatAnyone 2: Scaling Video Matting via a Learned Quality Evaluator
CV and Pattern Recognition
Makes computer-cutouts of people in videos perfect.