OmniGen: Unified Multimodal Sensor Generation for Autonomous Driving

Published: December 16, 2025 | arXiv ID: 2512.14225v1

By: Tao Tang, Enhui Ma, Xia Zhou, and more

Potential Business Impact:

Creates realistic driving scenes for self-driving cars.

Business Areas:
Autonomous Vehicles, Transportation

Autonomous driving has seen remarkable advancements, largely driven by extensive real-world data collection. However, acquiring diverse and corner-case data remains costly and inefficient. Generative models have emerged as a promising solution by synthesizing realistic sensor data, but existing approaches focus primarily on single-modality generation, leading to inefficiencies and misalignment across multimodal sensor data. To address these challenges, we propose OmniGen, which generates aligned multimodal sensor data in a unified framework. Our approach leverages a shared Bird's Eye View (BEV) space to unify multimodal features and introduces a novel generalizable multimodal reconstruction method, UAE, to jointly decode LiDAR and multi-view camera data. UAE performs multimodal sensor decoding through volume rendering, enabling accurate and flexible reconstruction. Furthermore, we incorporate a Diffusion Transformer (DiT) with a ControlNet branch to enable controllable multimodal sensor generation. Comprehensive experiments demonstrate that OmniGen achieves strong performance in unified multimodal sensor data generation, with multimodal consistency and flexible sensor adjustments.
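The abstract's core mechanism, a single shared BEV feature space decoded into both sensors by volume rendering, can be illustrated with a toy example. The sketch below is not the paper's code: the BEV grid is random, the density and color heads are toy linear maps, and all names (sample_bev, render_ray) are hypothetical assumptions. It only shows how one volume-rendering pass can yield both a camera pixel (accumulated color) and a LiDAR return (expected depth) from the same underlying features, which is what keeps the two modalities aligned by construction.

```python
# Minimal sketch: decode camera and LiDAR from one shared BEV grid via
# volume rendering, in the spirit of the UAE decoder described above.
# Shapes, heads, and function names are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Shared BEV feature grid (H, W, C). In the paper this would come from
# the generative model; here it is random for illustration.
H, W, C = 32, 32, 8
bev = rng.normal(size=(H, W, C))

def sample_bev(xy):
    """Nearest-neighbor lookup of BEV features at positions in [0, 1)^2."""
    ij = np.clip((xy * [H, W]).astype(int), 0, [H - 1, W - 1])
    return bev[ij[..., 0], ij[..., 1]]

# Toy heads mapping a BEV feature to density (sigma) and RGB color.
w_sigma = rng.normal(size=C)
w_rgb = rng.normal(size=(C, 3))

def render_ray(origin, direction, n_samples=64, t_max=1.0):
    """Volume-render one ray, returning an RGB value (camera modality)
    and an expected depth (LiDAR modality) from the same features."""
    t = np.linspace(0.0, t_max, n_samples)
    pts = origin + t[:, None] * direction            # (n_samples, 2)
    feats = sample_bev(pts)                          # (n_samples, C)
    sigma = np.log1p(np.exp(feats @ w_sigma))        # softplus density
    rgb = 1.0 / (1.0 + np.exp(-(feats @ w_rgb)))     # sigmoid color
    delta = np.diff(t, append=t[-1] + (t[1] - t[0])) # sample spacing
    alpha = 1.0 - np.exp(-sigma * delta)             # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                          # rendering weights
    pixel = weights @ rgb                            # camera pixel color
    depth = weights @ t                              # expected LiDAR depth
    return pixel, depth

# The same renderer serves both sensors: camera rays keep the color,
# LiDAR rays keep the depth, so the outputs stay geometrically aligned.
pixel, depth = render_ray(np.array([0.1, 0.1]), np.array([0.7, 0.7]))
print("camera pixel:", pixel, "lidar depth:", depth)
```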

Page Count
13 pages

Category
Computer Science:
Computer Vision and Pattern Recognition