Rethinking the Spatio-Temporal Alignment of End-to-End 3D Perception
By: Xiaoyu Li, Peidong Li, Xian Wu, and more
Potential Business Impact:
Helps self-driving cars detect and track objects more accurately, even when camera data is degraded (e.g., by bad weather).
Spatio-temporal alignment is crucial for temporal modeling in end-to-end (E2E) perception for autonomous driving (AD), as it provides valuable structural and textural prior information. Existing methods typically rely on attention mechanisms to align objects across frames, simplifying motion with a single explicit physical model (e.g., constant velocity). These approaches favor semantic features for implicit alignment, calling into question the importance of explicit motion modeling in the traditional perception paradigm. However, motion states and object features vary across categories and frames, which renders such alignment suboptimal. To address this, we propose HAT, a spatio-temporal alignment module that allows each object to adaptively decode the optimal alignment proposal from multiple hypotheses without direct supervision. Specifically, HAT first uses multiple explicit motion models to generate spatial anchors and motion-aware feature proposals for historical instances. It then performs multi-hypothesis decoding by incorporating the semantic and motion cues embedded in cached object queries, ultimately producing the optimal alignment proposal for the target frame. On nuScenes, HAT consistently improves 3D temporal detectors and trackers across diverse baselines, achieving state-of-the-art tracking results with 46.0% AMOTA on the test set when paired with the DETR3D detector. In an object-centric E2E AD method, HAT improves perception accuracy (+1.3% mAP, +3.1% AMOTA) and reduces the collision rate by 32%. When semantic features are corrupted (nuScenes-C), HAT's strengthened motion modeling enables more robust perception and planning in E2E AD.
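To make the two-step procedure concrete, below is a minimal PyTorch sketch of multi-hypothesis alignment in the spirit of the abstract, not the authors' implementation. The module name (MotionHypothesisAlign), the choice of three motion models (static, constant velocity, damped velocity), and all dimensions are illustrative assumptions; the key ideas it shows are (1) explicit motion models generating spatial anchors, (2) motion-aware feature proposals built from cached object queries, and (3) a learned, unsupervised soft selection over hypotheses.

```python
# Minimal sketch of multi-hypothesis spatio-temporal alignment, assuming PyTorch.
# All names, motion models, and dimensions are hypothetical illustrations.
import torch
import torch.nn as nn


class MotionHypothesisAlign(nn.Module):
    def __init__(self, dim: int = 256, num_hyp: int = 3):
        super().__init__()
        self.num_hyp = num_hyp
        self.anchor_embed = nn.Linear(3, dim)  # lift 3D spatial anchors into feature space
        self.score = nn.Linear(dim, 1)         # per-hypothesis score; trained without direct supervision
        self.out = nn.Linear(dim, dim)

    def propagate(self, center: torch.Tensor, vel: torch.Tensor, dt: float) -> torch.Tensor:
        """Generate spatial anchors from explicit motion models.

        center: (N, 3) historical object centers; vel: (N, 3) velocities.
        Returns (N, num_hyp, 3) anchors: static, constant velocity, and a
        damped-velocity model as a crude stand-in for a third hypothesis.
        """
        static = center
        const_vel = center + vel * dt
        damped = center + 0.5 * vel * dt
        return torch.stack([static, const_vel, damped], dim=1)

    def forward(self, query: torch.Tensor, center: torch.Tensor,
                vel: torch.Tensor, dt: float):
        """query: (N, dim) cached object queries carrying semantic and motion cues."""
        anchors = self.propagate(center, vel, dt)                     # (N, K, 3)
        # Motion-aware feature proposals: one per hypothesis, per instance.
        proposals = query.unsqueeze(1) + self.anchor_embed(anchors)   # (N, K, dim)
        # Adaptive soft selection over hypotheses, driven by the cached query.
        weights = torch.softmax(self.score(proposals), dim=1)         # (N, K, 1)
        aligned = (weights * proposals).sum(dim=1)                    # (N, dim)
        return self.out(aligned), anchors, weights


# Usage: align 4 cached instances to the current frame (dt = 0.5 s).
module = MotionHypothesisAlign(dim=256, num_hyp=3)
query = torch.randn(4, 256)
center, vel = torch.randn(4, 3), torch.randn(4, 3)
aligned, anchors, weights = module(query, center, vel, dt=0.5)
```

The softmax weighting stands in for the paper's multi-hypothesis decoding: because the weights are produced from the proposals themselves, each object can adaptively favor whichever motion model best matches its observed state, with no per-hypothesis labels required.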
Similar Papers
SPAN: Spatial-Projection Alignment for Monocular 3D Object Detection
CV and Pattern Recognition
Makes 3D object detection from single images more accurate.
Multi-Domain Enhanced Map-Free Trajectory Prediction with Selective Attention
CV and Pattern Recognition
Helps self-driving cars predict where others will go.
End-to-End 3D Spatiotemporal Perception with Multimodal Fusion and V2X Collaboration
CV and Pattern Recognition
Helps self-driving cars see around corners.