Understanding Virality: A Rubric-Based Vision-Language Model Framework for Short-Form Edutainment Evaluation
By: Arnav Gupta, Gurekas Singh Sahney, Hardik Rathi, and more
Potential Business Impact:
Helps videos get more likes by understanding what viewers respond to.
Evaluating short-form video content requires moving beyond surface-level quality metrics toward human-aligned, multimodal reasoning. While existing frameworks like VideoScore-2 assess visual and semantic fidelity, they do not capture how specific audiovisual attributes drive real audience engagement. In this work, we propose a data-driven evaluation framework that uses Vision-Language Models (VLMs) to extract audiovisual features in an unsupervised manner, clusters them into interpretable factors, and trains a regression-based evaluator to predict engagement on short-form edutainment videos. Our curated YouTube Shorts dataset enables systematic analysis of how VLM-derived features relate to human engagement behavior. Experiments show strong correlations between predicted and actual engagement, demonstrating that our lightweight, feature-based evaluator provides more interpretable and scalable assessments than traditional metrics (e.g., SSIM, FID). By grounding evaluation in both multimodal feature importance and human-centered engagement signals, our approach advances toward robust and explainable video understanding.
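To make the pipeline concrete, here is a minimal Python sketch of the three stages the abstract describes (feature extraction, clustering into factors, regression against engagement). It is illustrative only: `extract_vlm_features` is a hypothetical stand-in for the paper's actual VLM scoring step, the synthetic engagement target, factor count, and choice of Ridge regression are assumptions, and none of it reflects the authors' exact configuration.

```python
"""
Minimal sketch of the pipeline described in the abstract, under assumptions:
- `extract_vlm_features` is a hypothetical placeholder for a real VLM
  feature extractor (the paper's prompts/model are not specified here).
- The factor count, the Ridge regressor, and the simulated engagement
  target are illustrative choices, not the authors' settings.
"""
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def extract_vlm_features(n_videos: int, n_features: int = 64) -> np.ndarray:
    # Placeholder: in practice each row would hold VLM-scored audiovisual
    # attributes of one video (pacing, hook strength, caption clarity, ...).
    return rng.normal(size=(n_videos, n_features))

# --- 1. Feature extraction (simulated here) ---
X = extract_vlm_features(n_videos=500)
# Simulated engagement target (e.g., log views or like ratio).
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=len(X))

# --- 2. Cluster feature dimensions into interpretable factors ---
n_factors = 8
feature_clusters = KMeans(n_clusters=n_factors, n_init=10,
                          random_state=0).fit_predict(X.T)
# Each factor score is the mean of the features assigned to that cluster.
factors = np.column_stack(
    [X[:, feature_clusters == k].mean(axis=1) for k in range(n_factors)]
)

# --- 3. Regression-based evaluator on factor scores ---
X_tr, X_te, y_tr, y_te = train_test_split(factors, y,
                                          test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

# --- 4. Correlate predicted vs. actual engagement ---
rho, _ = spearmanr(model.predict(X_te), y_te)
print(f"Spearman correlation (predicted vs. actual): {rho:.3f}")
print("Factor weights (for interpretability):", np.round(model.coef_, 3))
```

The linear factor weights are what make the evaluator interpretable: each coefficient ties a cluster of VLM-derived attributes to its estimated effect on engagement.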
Similar Papers
Vision Large Language Models Are Good Noise Handlers in Engagement Analysis
CV and Pattern Recognition
Helps computers understand how interested people are.
Enhancing Subsequent Video Retrieval via Vision-Language Models (VLMs)
CV and Pattern Recognition
Finds videos faster by understanding their stories.
Engagement Prediction of Short Videos with Large Multimodal Models
CV and Pattern Recognition
Helps videos get more likes by understanding sound.