Score: 1

UniUGP: Unifying Understanding, Generation, and Planning For End-to-end Autonomous Driving

Published: December 10, 2025 | arXiv ID: 2512.09864v1

By: Hao Lu, Ziyang Liu, Guangfeng Jiang, and more

Potential Business Impact:

Helps self-driving systems learn visual cause-and-effect from unlabeled driving videos, improving how they handle rare, long-tail scenarios.

Business Areas:
Autonomous Vehicles, Transportation

Autonomous driving (AD) systems struggle in long-tail scenarios due to limited world knowledge and weak modeling of visual dynamics. Existing vision-language-action (VLA)-based methods cannot leverage unlabeled videos for visual causal learning, while world-model-based methods lack the reasoning capabilities of large language models. In this paper, we construct multiple specialized datasets providing reasoning and planning annotations for complex scenarios. We then propose UniUGP, a unified Understanding-Generation-Planning framework that synergizes scene reasoning, future video generation, and trajectory planning through a hybrid expert architecture. By integrating pre-trained VLMs and video generation models, UniUGP leverages visual dynamics and semantic reasoning to enhance planning performance. Taking multi-frame observations and language instructions as input, it produces interpretable chain-of-thought reasoning, physically consistent trajectories, and coherent future videos. We introduce a four-stage training strategy that progressively builds these capabilities across multiple existing AD datasets along with the proposed specialized ones. Experiments demonstrate state-of-the-art performance in perception, reasoning, and decision-making, with superior generalization to challenging long-tail situations.
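The abstract describes one model that fuses multi-frame observations with a language instruction and emits three outputs: chain-of-thought text, a planned trajectory, and a future video. The sketch below is a minimal, hypothetical PyTorch illustration of that interface only; every module, name, and dimension is an assumption made for illustration, since the paper's actual hybrid expert architecture (built on pre-trained VLMs and video generation models) is not specified in this summary.

```python
# Minimal sketch of a UniUGP-style interface, as described in the abstract.
# All classes, heads, and sizes here are hypothetical stand-ins, not the
# paper's API: multi-frame observations plus an instruction go in; reasoning
# logits, a trajectory, and a (coarse) future frame come out.
import torch
import torch.nn as nn


class UniUGPSketch(nn.Module):
    """Hybrid-expert stand-in: a shared fused scene representation feeds
    an understanding head (CoT reasoning), a planning head (waypoints),
    and a generation head (future frame)."""

    def __init__(self, d_model: int = 256, vocab: int = 10_000, horizon: int = 8):
        super().__init__()
        self.horizon = horizon  # number of future (x, y) waypoints to plan
        # Stand-in encoders; the paper uses pre-trained VLM / video models.
        self.frame_encoder = nn.Sequential(
            nn.Flatten(start_dim=2),   # (B, T, C*H*W)
            nn.LazyLinear(d_model),    # per-frame embedding
        )
        self.text_encoder = nn.Embedding(vocab, d_model)
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Three expert heads over the shared representation.
        self.reasoning_head = nn.Linear(d_model, vocab)        # CoT token logits
        self.planning_head = nn.Linear(d_model, horizon * 2)   # (x, y) waypoints
        self.video_head = nn.Linear(d_model, 3 * 32 * 32)      # coarse next frame

    def forward(self, frames: torch.Tensor, instruction: torch.Tensor):
        # frames: (B, T, C, H, W); instruction: (B, L) token ids
        vis = self.frame_encoder(frames)       # (B, T, d_model)
        txt = self.text_encoder(instruction)   # (B, L, d_model)
        ctx = self.fuse(torch.cat([vis, txt], dim=1)).mean(dim=1)  # (B, d_model)
        cot_logits = self.reasoning_head(ctx)                        # understanding
        trajectory = self.planning_head(ctx).view(-1, self.horizon, 2)  # planning
        future_frame = self.video_head(ctx).view(-1, 3, 32, 32)        # generation
        return cot_logits, trajectory, future_frame


# Smoke test with random inputs.
model = UniUGPSketch()
frames = torch.randn(2, 4, 3, 32, 32)            # 2 clips of 4 RGB frames
instruction = torch.randint(0, 10_000, (2, 12))  # tokenized instruction
cot, traj, video = model(frames, instruction)
print(cot.shape, traj.shape, video.shape)  # (2, 10000) (2, 8, 2) (2, 3, 32, 32)
```

The smoke test prints one shape per head, mirroring the three outputs the abstract names; the real system would decode reasoning tokens autoregressively and generate full video clips rather than a single low-resolution frame.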

Page Count
26 pages

Category
Computer Science: Computer Vision and Pattern Recognition