Score: 3

Scene-VLM: Multimodal Video Scene Segmentation via Vision-Language Models

Published: December 25, 2025 | arXiv ID: 2512.21778v1

By: Nimrod Berman, Adam Botach, Emanuel Ben-Baruch, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Automatically splits long videos such as movies into coherent scenes, making large-scale video content easier to index, search, and analyze.

Business Areas:
Image Recognition, Data and Analytics, Software

Segmenting long-form videos into semantically coherent scenes is a fundamental task in large-scale video understanding. Existing encoder-based methods suffer from visual-centric biases, classify each shot in isolation without leveraging sequential dependencies, and lack both narrative understanding and explainability. In this paper, we present Scene-VLM, the first fine-tuned vision-language model (VLM) framework for video scene segmentation. Scene-VLM jointly processes visual and textual cues, including frames, transcriptions, and optional metadata, to enable multimodal reasoning across consecutive shots. The model generates predictions sequentially with causal dependencies among shots and introduces a context-focus window mechanism to ensure sufficient temporal context for each shot-level decision. In addition, we propose a scheme to extract confidence scores from the token-level logits of the VLM, enabling controllable precision-recall trade-offs that were previously limited to encoder-based methods. Furthermore, we demonstrate that our model can be aligned to generate coherent natural-language rationales for its boundary decisions through minimal targeted supervision. Our approach achieves state-of-the-art performance on standard scene segmentation benchmarks. On MovieNet, for example, Scene-VLM yields significant improvements of +6 AP and +13.7 F1 over the previous leading method.
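The abstract mentions deriving shot-level confidence scores from the VLM's token-level logits to enable a controllable precision-recall trade-off. The sketch below illustrates one plausible way such a scheme could work, assuming the model emits a binary boundary token (e.g., "yes"/"no") per shot; the function names (`boundary_confidence`, `segment_shots`), the yes/no token convention, and the example logits are illustrative assumptions, not the authors' implementation.

```python
import math

def boundary_confidence(logit_yes: float, logit_no: float) -> float:
    """Softmax over the two candidate boundary tokens: probability that
    the current shot starts a new scene (assumed yes/no token scheme)."""
    m = max(logit_yes, logit_no)          # subtract max for numerical stability
    e_yes = math.exp(logit_yes - m)
    e_no = math.exp(logit_no - m)
    return e_yes / (e_yes + e_no)

def segment_shots(shot_logits, threshold=0.5):
    """Turn per-shot (logit_yes, logit_no) pairs into boundary decisions.
    Raising the threshold favors precision; lowering it favors recall."""
    boundaries = []
    for i, (ly, ln) in enumerate(shot_logits):
        conf = boundary_confidence(ly, ln)
        if conf >= threshold:
            boundaries.append((i, conf))
    return boundaries

if __name__ == "__main__":
    # Hypothetical logits for six consecutive shots.
    logits = [(-2.1, 1.8), (0.4, 0.1), (3.2, -1.5),
              (-0.7, 0.9), (2.0, 0.3), (-1.1, 2.2)]
    print(segment_shots(logits, threshold=0.6))
```

Under this assumed scheme, sweeping the threshold over the per-shot probabilities is what yields the precision-recall curve that encoder-based methods get natively from their classification heads.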

Country of Origin
🇮🇱 🇺🇸 Israel, United States

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition