Evaluating Video Models as Simulators of Multi-Person Pedestrian Trajectories
By: Aaron Appelle, Jerome P. Lynch
Potential Business Impact:
Checks whether AI-made videos show crowds of people walking realistically.
Large-scale video generation models have demonstrated high visual realism in diverse contexts, spurring interest in their potential as general-purpose world simulators. However, existing benchmarks focus on individual subjects rather than scenes with multiple interacting people, so the plausibility of multi-agent dynamics in generated videos remains unverified. We propose a rigorous evaluation protocol to benchmark text-to-video (T2V) and image-to-video (I2V) models as implicit simulators of pedestrian dynamics. For I2V, we leverage start frames from established datasets to enable comparison with a ground-truth video dataset. For T2V, we develop a prompt suite to explore diverse pedestrian densities and interactions. A key component is a method to reconstruct 2D bird's-eye-view trajectories from pixel space without known camera parameters. Our analysis reveals that leading models have learned surprisingly effective priors for plausible multi-agent behavior. However, failure modes such as merging and disappearing pedestrians highlight areas for future improvement.
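The abstract does not detail how trajectories are lifted from pixel space to a bird's-eye view. As a rough illustration only, the minimal sketch below assumes a planar ground and a few hand-picked pixel-to-ground correspondences, estimates a homography with OpenCV, and projects a tracked pedestrian's foot-point track into metric BEV coordinates; all names, correspondences, and values are hypothetical and this is not the authors' actual pipeline.

```python
"""Illustrative sketch: pixel tracks -> 2D bird's-eye-view (BEV) trajectories
via a ground-plane homography, plus a simple average-displacement-error (ADE)
score against a ground-truth track. Assumed, not the paper's method."""
import numpy as np
import cv2

# Four (or more) pixel <-> ground-plane correspondences, e.g. scene landmarks
# with known metric positions. The values below are made up for illustration.
pixel_pts = np.array([[120, 540], [830, 560], [700, 260], [240, 250]],
                     dtype=np.float32)
ground_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 15.0], [0.0, 15.0]],
                      dtype=np.float32)

# Estimate the homography mapping ground-plane pixels to metric BEV
# coordinates; RANSAC guards against outlier correspondences.
H, _ = cv2.findHomography(pixel_pts, ground_pts, cv2.RANSAC)

def pixels_to_bev(track_px: np.ndarray) -> np.ndarray:
    """Project a (T, 2) pixel track (e.g. foot points) into BEV meters."""
    pts = track_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

def ade(pred: np.ndarray, gt: np.ndarray) -> float:
    """Average displacement error between two time-aligned (T, 2) BEV tracks."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

# Toy usage: a straight-line pixel track for one pedestrian.
track_px = np.stack([np.linspace(150, 700, 30), np.full(30, 520.0)], axis=1)
bev_track = pixels_to_bev(track_px)
print("BEV start/end:", bev_track[0], bev_track[-1])
```

With BEV tracks in hand, trajectory-level statistics (speeds, inter-agent distances, ADE against real footage) can be computed without ever knowing the camera's intrinsics, which is consistent with the abstract's claim of operating without known camera parameters.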
Similar Papers
Can Image-To-Video Models Simulate Pedestrian Dynamics?
CV and Pattern Recognition
Tests whether AI-made videos show people walking realistically.
VideoVerse: How Far is Your T2V Generator from a World Model?
CV and Pattern Recognition
Tests whether AI video generators understand how the world works.
AdaViewPlanner: Adapting Video Diffusion Models for Viewpoint Planning in 4D Scenes
CV and Pattern Recognition
Lets computers pick the best camera angles.