Manboformer: Learning Gaussian Representations via Spatial-temporal Attention Mechanism
By: Ziyue Zhao, Qining Qi, Jianfa Ma
Potential Business Impact:
Helps self-driving cars see better in 3D.
In 3D semantic occupancy prediction for autonomous driving, GaussianFormer proposed an alternative to voxel-based grid prediction: describing the scene with sparse, object-centric 3D semantic Gaussians, a scheme with lower memory requirements. Each 3D Gaussian represents a flexible region of interest together with its semantic features, both of which are iteratively refined through an attention mechanism. Experiments found, however, that the number of Gaussians this method requires exceeds the query resolution of the original dense grid network, which impairs performance. We therefore optimize GaussianFormer by exploiting previously unused temporal information: we adapt the spatial-temporal self-attention mechanism from earlier grid-based occupancy networks and apply it to GaussianFormer. Experiments on the nuScenes dataset are currently underway.
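The abstract describes two ingredients: a sparse set of 3D semantic Gaussians, and a temporal self-attention step that fuses features from a previous frame. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the parameterization (mean, scale, rotation, semantic features) follows the general GaussianFormer formulation, and `temporal_self_attention` is a generic scaled dot-product attention standing in for the paper's spatial-temporal mechanism.

```python
import numpy as np

# Hypothetical sketch: each 3D semantic Gaussian carries a mean (3),
# per-axis scale (3), rotation quaternion (4), and a semantic feature
# vector (C). Only the features participate in the attention step below.
N, C = 8, 16                          # number of Gaussians, feature dim
rng = np.random.default_rng(0)
means = rng.normal(size=(N, 3))       # Gaussian centers in ego coordinates
scales = np.abs(rng.normal(size=(N, 3)))
quats = rng.normal(size=(N, 4))
quats /= np.linalg.norm(quats, axis=-1, keepdims=True)
feats_t = rng.normal(size=(N, C))     # current-frame semantic features
feats_prev = rng.normal(size=(N, C))  # previous-frame features, assumed
                                      # already aligned to the current ego pose

def temporal_self_attention(q_feats, kv_feats, dim):
    """Scaled dot-product attention: current-frame Gaussians query
    previous-frame features, a standard temporal-fusion step."""
    scores = q_feats @ kv_feats.T / np.sqrt(dim)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ kv_feats

fused = temporal_self_attention(feats_t, feats_prev, C)
assert fused.shape == (N, C)
```

In the actual method the previous-frame Gaussians would be warped by the known ego motion before attention; here that alignment is assumed to have already happened.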
Similar Papers
GaussianFormer3D: Multi-Modal Gaussian-based Semantic Occupancy Prediction with 3D Deformable Attention
CV and Pattern Recognition
Helps self-driving cars see better in 3D.
TGP: Two-modal occupancy prediction with 3D Gaussian and sparse points for 3D Environment Awareness
CV and Pattern Recognition
Helps cars understand 3D spaces better.
QuadricFormer: Scene as Superquadrics for 3D Semantic Occupancy Prediction
CV and Pattern Recognition
Helps self-driving cars see shapes better, faster.