Video Killed the Energy Budget: Characterizing the Latency and Power Regimes of Open Text-to-Video Models
By: Julien Delavande, Regis Pierrard, Sasha Luccioni
Potential Business Impact:
Quantifies the power and time costs of AI video generation, informing cheaper and more sustainable deployments.
Recent advances in text-to-video (T2V) generation have enabled the creation of high-fidelity, temporally coherent clips from natural language prompts. Yet these systems come with significant computational costs, and their energy demands remain poorly understood. In this paper, we present a systematic study of the latency and energy consumption of state-of-the-art open-source T2V models. We first develop a compute-bound analytical model that predicts scaling laws with respect to spatial resolution, temporal length, and denoising steps. We then validate these predictions through fine-grained experiments on WAN2.1-T2V, showing quadratic growth with spatial and temporal dimensions, and linear scaling with the number of denoising steps. Finally, we extend our analysis to six diverse T2V models, comparing their runtime and energy profiles under default settings. Our results provide both a benchmark reference and practical insights for designing and deploying more sustainable generative video systems.
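To make the scaling argument concrete, here is a minimal back-of-the-envelope estimator; it is a sketch, not the authors' code. It assumes a DiT-style backbone where per-step compute splits into linear/MLP terms that grow with the space-time token count N and self-attention terms that grow with N². All constants (VAE downsampling factors, patch size, hidden width, depth, GPU throughput, power draw) are illustrative placeholders, not measured values from the paper.

```python
# Sketch of the compute-bound scaling law for a DiT-style T2V model.
# Per layer: linear/MLP cost is O(N * d^2), self-attention cost is O(N^2 * d),
# with N = space-time tokens after VAE downsampling and patchification.
# All defaults below are illustrative assumptions, not the paper's numbers.

def num_tokens(height, width, frames, vae_spatial=8, vae_temporal=4, patch=2):
    """Space-time token count after VAE compression and 2x2 patchification."""
    h = height // (vae_spatial * patch)
    w = width // (vae_spatial * patch)
    t = frames // vae_temporal + 1
    return h * w * t

def flops_per_denoise_step(n_tokens, hidden=1536, layers=30):
    """Rough forward-pass FLOPs for one denoising step."""
    linear = 2 * 12 * n_tokens * hidden**2 * layers    # QKVO + MLP matmuls
    attention = 2 * 2 * n_tokens**2 * hidden * layers  # QK^T and attn @ V
    return linear + attention

def energy_joules(height, width, frames, steps, gpu_flops=3e14, power_w=400):
    """Energy = total FLOPs / sustained throughput * average power draw."""
    total = steps * flops_per_denoise_step(num_tokens(height, width, frames))
    return total / gpu_flops * power_w

# Doubling height and width quadruples N, inflating the attention term ~16x,
# while doubling the denoising steps scales total energy exactly linearly.
base = energy_joules(480, 832, 81, steps=50)
print(f"base:     {base / 3600:.1f} Wh")
print(f"2x space: {energy_joules(960, 1664, 81, 50) / 3600:.1f} Wh")
print(f"2x steps: {energy_joules(480, 832, 81, 100) / 3600:.1f} Wh")
```

Under these assumptions the toy model reproduces the regimes the abstract describes: energy grows quadratically once attention dominates the token budget, and exactly linearly in the number of denoising steps.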