Map2Video: Street View Imagery Driven AI Video Generation
By: Hye-Young Jo, Mose Sakashita, Aditi Mishra, and more
AI video generation has lowered the barrier to video creation, but current tools still struggle with consistency: filmmakers often find that generated clips fail to keep characters and backgrounds coherent from shot to shot, making it difficult to build cohesive sequences. A formative study with filmmakers highlighted challenges in shot composition, character motion, and camera control. We present Map2Video, a street view imagery-driven AI video generation tool grounded in real-world geographies. The system integrates Unity and ComfyUI with the VACE video generation model, and uses OpenStreetMap and Mapillary for street view imagery. Drawing on familiar filmmaking practices such as location scouting and rehearsal, Map2Video lets users choose map locations, position actors and cameras in street view imagery, sketch movement paths, refine camera motion, and generate spatially consistent videos. We evaluated Map2Video with 12 filmmakers. Compared to an image-to-video baseline, it achieved higher spatial accuracy, required less cognitive effort, and offered stronger controllability for both scene replication and open-ended creative exploration.
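The paper does not publish its retrieval code, but the abstract's use of Mapillary for street view imagery can be illustrated with a minimal sketch: given a map location (for example, one picked from OpenStreetMap), query the public Mapillary Graph API for nearby street-level images that could seed scene setup. The bounding-box size, field selection, token placeholder, and coordinates below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: look up street view images near a chosen map location
# via the Mapillary Graph API. Credentials and parameters are placeholders.
import requests

MAPILLARY_TOKEN = "MLY|<your-app-id>|<your-token>"  # placeholder credential

def fetch_street_view_images(lat: float, lon: float, radius_deg: float = 0.001):
    """Return image records (ID, thumbnail URL) near a latitude/longitude."""
    # Mapillary expects bbox as minLon,minLat,maxLon,maxLat
    bbox = f"{lon - radius_deg},{lat - radius_deg},{lon + radius_deg},{lat + radius_deg}"
    resp = requests.get(
        "https://graph.mapillary.com/images",
        params={
            "access_token": MAPILLARY_TOKEN,
            "bbox": bbox,
            "fields": "id,thumb_2048_url,computed_geometry",
            "limit": 10,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    # Illustrative coordinates (near the Eiffel Tower); any map location works.
    for img in fetch_street_view_images(48.8584, 2.2945):
        print(img["id"], img.get("thumb_2048_url"))
```

The returned images would then feed the downstream pipeline the abstract describes (actor and camera placement in Unity, followed by VACE-based generation via ComfyUI).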
Similar Papers
StreetViewAI: Making Street View Accessible Using Context-Aware Multimodal AI
Human-Computer Interaction
Lets blind people explore the world virtually.
Simulating the Visual World with Artificial Intelligence: A Roadmap
Artificial Intelligence
Creates realistic videos that act like real worlds.