Chapter-Llama: Efficient Chaptering in Hour-Long Videos with LLMs
By: Lucas Ventura, Antoine Yang, Cordelia Schmid, and more
Potential Business Impact:
Divides long videos into chapters with titles.
We address the task of video chaptering, i.e., partitioning a long video timeline into semantic units and generating corresponding chapter titles. While relatively underexplored, automatic chaptering has the potential to enable efficient navigation and content retrieval in long-form videos. In this paper, we achieve strong chaptering performance on hour-long videos by efficiently addressing the problem in the text domain with our 'Chapter-Llama' framework. Specifically, we leverage a pretrained large language model (LLM) with a large context window, and feed as input (i) speech transcripts and (ii) captions describing video frames, along with their respective timestamps. Given the inefficiency of exhaustively captioning all frames, we propose a lightweight speech-guided frame selection strategy based on speech transcript content, and experimentally demonstrate its clear advantages. We train the LLM to output timestamps for the chapter boundaries, as well as free-form chapter titles. This simple yet powerful approach scales to processing one-hour-long videos in a single forward pass. Our results demonstrate substantial improvements (e.g., 45.3 vs. 26.7 F1 score) over the state of the art on the recent VidChapters-7M benchmark. To promote further research, we release our code and models at our project page.
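The pipeline the abstract describes is straightforward to sketch: interleave timestamped speech segments and frame captions into a single text prompt, select which frames to caption using the speech transcript, and parse the LLM's "timestamp - title" output back into chapters. The sketch below is a minimal, hypothetical illustration of that format, not the released Chapter-Llama code; the function names, prompt wording, one-frame-per-segment selection heuristic, and 'HH:MM:SS - title' output convention are all our assumptions.

```python
"""Minimal sketch of a Chapter-Llama-style text-domain chaptering pipeline.
All names and formats here are illustrative assumptions."""
import re


def select_frame_times(speech_segments, max_frames=100):
    """Speech-guided frame selection sketch: take one frame timestamp per
    speech segment, subsampled to a budget. (Assumed heuristic; the paper's
    exact strategy may differ.)"""
    times = [t for t, _ in speech_segments]
    stride = max(1, len(times) // max_frames)
    return times[::stride]


def build_prompt(speech_segments, frame_captions):
    """Interleave timestamped speech and frame captions chronologically.

    speech_segments: list of (start_seconds, text) from an ASR transcript
    frame_captions:  list of (time_seconds, caption) for the selected frames
    """
    def fmt(t):  # seconds -> HH:MM:SS
        h, rem = divmod(int(t), 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d}"

    lines = [(t, f"{fmt(t)} [speech] {txt}") for t, txt in speech_segments]
    lines += [(t, f"{fmt(t)} [frame] {cap}") for t, cap in frame_captions]
    lines.sort(key=lambda x: x[0])  # one chronological stream of evidence
    body = "\n".join(line for _, line in lines)
    return ("Segment the video into chapters. For each chapter output a "
            "line 'HH:MM:SS - title'.\n\n" + body)


def parse_chapters(llm_output):
    """Parse 'HH:MM:SS - title' lines back into (seconds, title) pairs."""
    chapters = []
    for m in re.finditer(r"(\d{2}):(\d{2}):(\d{2})\s*-\s*(.+)", llm_output):
        h, mi, s, title = m.groups()
        chapters.append((int(h) * 3600 + int(mi) * 60 + int(s), title.strip()))
    return chapters
```

Because the whole hour of evidence is rendered as one text prompt, a single forward pass of a long-context LLM suffices, which is the efficiency argument the abstract makes.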
Similar Papers
Video Summarization with Large Language Models
CV and Pattern Recognition
Helps video summaries capture stories better.
ARC-Chapter: Structuring Hour-Long Videos into Navigable Chapters and Hierarchical Summaries
CV and Pattern Recognition
Breaks long videos into easy-to-find chapters.
Unleashing Hour-Scale Video Training for Long Video-Language Understanding
CV and Pattern Recognition
Lets computers understand hour-long videos.