End-to-End 3D Spatiotemporal Perception with Multimodal Fusion and V2X Collaboration
By: Zhenwei Yang, Yibo Ai, Weidong Zhang
Potential Business Impact:
Helps self-driving cars see around corners.
Multi-view cooperative perception and multimodal fusion are essential for reliable 3D spatiotemporal understanding in autonomous driving, especially under occlusions, limited viewpoints, and communication delays in V2X scenarios. This paper proposes XET-V2X, a multimodal fused end-to-end tracking framework for V2X collaboration that unifies multi-view, multimodal sensing within a shared spatiotemporal representation. To efficiently align heterogeneous viewpoints and modalities, XET-V2X introduces a dual-layer spatial cross-attention module based on multi-scale deformable attention. Multi-view image features are first aggregated to enhance semantic consistency, followed by point cloud fusion guided by the updated spatial queries, enabling effective cross-modal interaction while reducing computational overhead. Experiments on the real-world V2X-Seq-SPD dataset and the simulated V2X-Sim-V2V and V2X-Sim-V2I benchmarks demonstrate consistent improvements in detection and tracking performance under varying communication delays. Both quantitative results and qualitative visualizations indicate that XET-V2X achieves robust and temporally stable perception in complex traffic scenarios.
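The abstract describes a two-stage fusion order: spatial queries first attend to multi-view image features, then the updated queries guide point cloud fusion. The paper's implementation is not given here, so the following is only a minimal sketch of that idea, assuming standard multi-head attention in place of the multi-scale deformable attention the authors use; all class and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class DualLayerSpatialCrossAttention(nn.Module):
    """Hypothetical sketch of a dual-layer spatial cross-attention:
    stage 1 aggregates multi-view image features into spatial queries,
    stage 2 fuses point-cloud features guided by the updated queries.
    Plain multi-head attention stands in for multi-scale deformable
    attention purely for illustration."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.img_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pts_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, queries, img_feats, pts_feats):
        # queries:   (B, Nq, C) shared spatial queries
        # img_feats: (B, Ni, C) flattened multi-view image features
        # pts_feats: (B, Np, C) flattened point-cloud features
        # Stage 1: image aggregation updates the spatial queries.
        img_out, _ = self.img_attn(queries, img_feats, img_feats)
        queries = self.norm1(queries + img_out)
        # Stage 2: point-cloud fusion guided by the updated queries.
        pts_out, _ = self.pts_attn(queries, pts_feats, pts_feats)
        return self.norm2(queries + pts_out)

if __name__ == "__main__":
    layer = DualLayerSpatialCrossAttention()
    q = torch.randn(2, 900, 256)        # spatial queries
    img = torch.randn(2, 6 * 400, 256)  # tokens from 6 camera views
    pts = torch.randn(2, 2000, 256)     # point-cloud tokens
    print(layer(q, img, pts).shape)     # torch.Size([2, 900, 256])
```

The sequential ordering (image first, then LiDAR keyed on the refreshed queries) is what lets each query attend to only one modality at a time, which is consistent with the abstract's claim of reduced computational overhead relative to joint cross-modal attention.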
Similar Papers
HeatV2X: Scalable Heterogeneous Collaborative Perception via Efficient Alignment and Interaction
CV and Pattern Recognition
Helps cars share what they see to drive safer.
Research Challenges and Progress in the End-to-End V2X Cooperative Autonomous Driving Competition
Robotics
Helps self-driving cars see around corners.