LangDriveCTRL: Natural Language Controllable Driving Scene Editing with Multi-modal Agents
By: Yun He, Francesco Pittaluga, Ziyu Jiang, and more
LangDriveCTRL is a natural-language-controllable framework for editing real-world driving videos to synthesize diverse traffic scenarios. It leverages explicit 3D scene decomposition to represent driving videos as a scene graph, containing static background and dynamic objects. To enable fine-grained editing and realism, it incorporates an agentic pipeline in which an Orchestrator transforms user instructions into execution graphs that coordinate specialized agents and tools. Specifically, an Object Grounding Agent establishes correspondence between free-form text descriptions and target object nodes in the scene graph; a Behavior Editing Agent generates multi-object trajectories from language instructions; and a Behavior Reviewer Agent iteratively reviews and refines the generated trajectories. The edited scene graph is rendered and then refined using a video diffusion tool to address artifacts introduced by object insertion and significant view changes. LangDriveCTRL supports both object node editing (removal, insertion and replacement) and multi-object behavior editing from a single natural-language instruction. Quantitatively, it achieves nearly $2\times$ higher instruction alignment than the previous SoTA, with superior structural preservation, photorealism, and traffic realism. Project page is available at: https://yunhe24.github.io/langdrivectrl/.
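The agentic pipeline described above (an Orchestrator running a ground → edit → review execution graph over a scene graph) can be sketched in miniature. All names, data structures, and the toy keyword grounding below are hypothetical illustrations, not the paper's actual interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """Toy stand-in for the paper's 3D scene decomposition:
    a static background plus dynamic object nodes."""
    background: str
    objects: dict = field(default_factory=dict)  # node_id -> text description

def ground_object(scene: SceneGraph, description: str) -> str:
    """Object Grounding Agent (sketch): match free-form text to a node
    via naive substring matching rather than a learned model."""
    for node_id, desc in scene.objects.items():
        if description.lower() in desc.lower():
            return node_id
    raise KeyError(f"no node matches {description!r}")

def edit_behavior(node_id: str, instruction: str) -> list:
    """Behavior Editing Agent (sketch): emit a placeholder trajectory
    of (x, y) waypoints for the grounded node."""
    return [(float(t), 0.0) for t in range(5)]

def review_behavior(trajectory: list, max_step: float = 2.0) -> list:
    """Behavior Reviewer Agent (sketch): clamp implausibly large
    per-step motion, standing in for iterative review-and-refine."""
    out = [trajectory[0]]
    for x, y in trajectory[1:]:
        px, py = out[-1]
        dx = max(-max_step, min(max_step, x - px))
        dy = max(-max_step, min(max_step, y - py))
        out.append((px + dx, py + dy))
    return out

def orchestrate(scene: SceneGraph, instruction: str, target: str) -> list:
    """Orchestrator (sketch): a fixed ground -> edit -> review graph;
    the real system builds the execution graph from the instruction."""
    node_id = ground_object(scene, target)
    trajectory = edit_behavior(node_id, instruction)
    return review_behavior(trajectory)

scene = SceneGraph("street", {"car_0": "a red car in the left lane"})
trajectory = orchestrate(scene, "make the red car cut in", "red car")
```

Rendering the edited scene graph and refining it with a video diffusion tool, as the abstract describes, would follow this trajectory stage and is omitted here.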
Similar Papers
LANGTRAJ: Diffusion Model and Dataset for Language-Conditioned Trajectory Simulation
Machine Learning (CS)
Tests self-driving cars with words, making them safer.
MMDrive: Interactive Scene Understanding Beyond Vision with Multi-representational Fusion
CV and Pattern Recognition
Helps self-driving cars understand 3D driving scenes.