GeoPredict: Leveraging Predictive Kinematics and 3D Gaussian Geometry for Precise VLA Manipulation
By: Jingjing Qian, Boyao Han, Chen Shi, and more
Vision-Language-Action (VLA) models achieve strong generalization in robotic manipulation but remain largely reactive and 2D-centric, making them unreliable in tasks that require precise 3D reasoning. We propose GeoPredict, a geometry-aware VLA framework that augments a continuous-action policy with predictive kinematic and geometric priors. GeoPredict introduces a trajectory-level module that encodes motion history and predicts multi-step 3D keypoint trajectories of the robot arm, and a predictive 3D Gaussian geometry module that forecasts workspace geometry with track-guided refinement along future keypoint trajectories. These predictive modules serve exclusively as training-time supervision through depth-based rendering, while inference requires only lightweight additional query tokens without invoking any 3D decoding. Experiments on RoboCasa Human-50, LIBERO, and real-world manipulation tasks show that GeoPredict consistently outperforms strong VLA baselines, especially in geometry-intensive and spatially demanding scenarios.
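The abstract's central design point is that the predictive heads supervise training only, while inference touches nothing beyond a few extra query tokens. The sketch below illustrates that separation under stated assumptions: every name (GeoPredictSketch, keypoint_head, depth_head), dimension, and loss is illustrative rather than the paper's implementation, and the dense depth head merely stands in for the paper's 3D Gaussian depth rendering.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeoPredictSketch(nn.Module):
    """Hypothetical GeoPredict-style policy: a transformer backbone consumes
    observation tokens plus a small set of learned query tokens. Auxiliary
    heads decode the query features into future 3D keypoints and a rendered
    depth map, but only when training; inference returns the action alone."""

    def __init__(self, d_model=256, n_queries=8, horizon=4, n_keypoints=6, act_dim=7):
        super().__init__()
        # Lightweight additional query tokens (the only inference-time overhead).
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.action_head = nn.Linear(d_model, act_dim)
        # Auxiliary heads used exclusively for training-time supervision.
        self.keypoint_head = nn.Linear(d_model, horizon * n_keypoints * 3)
        self.depth_head = nn.Linear(d_model, 64 * 64)  # stand-in for Gaussian depth rendering
        self.horizon, self.n_keypoints = horizon, n_keypoints

    def forward(self, obs_tokens, train=False):
        b = obs_tokens.shape[0]
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        h = self.backbone(torch.cat([obs_tokens, q], dim=1))
        h_q = h[:, -q.shape[1]:].mean(dim=1)       # pooled query-token features
        action = self.action_head(h_q)             # continuous action output
        if not train:
            return action                          # inference: no 3D decoding at all
        keypts = self.keypoint_head(h_q).view(b, self.horizon, self.n_keypoints, 3)
        depth = self.depth_head(h_q).view(b, 64, 64)
        return action, keypts, depth

# Toy training step with placeholder targets: action loss plus the two
# predictive losses (multi-step keypoint trajectories, rendered depth).
model = GeoPredictSketch()
obs = torch.randn(2, 32, 256)                      # (batch, obs tokens, d_model)
action, keypts, depth = model(obs, train=True)
loss = (F.mse_loss(action, torch.zeros_like(action))
        + F.mse_loss(keypts, torch.zeros_like(keypts))
        + F.mse_loss(depth, torch.zeros_like(depth)))
loss.backward()
```

Because the auxiliary heads sit behind the `train` flag, dropping them at deployment changes nothing upstream: the backbone still processes the same query tokens, which is consistent with the abstract's claim that inference adds only lightweight tokens and no 3D decoding.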