Light Field Based 6DoF Tracking of Previously Unobserved Objects
By: Nikolai Goncharov, James L. Gray, Donald G. Dansereau
Potential Business Impact:
Tracks any object, even shiny ones, without prior training.
Object tracking is an important step in robotics and autonomous driving pipelines and must generalize to previously unseen and complex objects. Existing high-performing methods often rely on pre-captured object views to build explicit reference models, which restricts them to a fixed set of known objects. Moreover, such reference models can struggle with visually complex appearance, reducing tracking quality. In this work, we introduce an object tracking method based on light field images that does not depend on a pre-trained model while remaining robust to complex visual behavior such as reflections. We extract semantic and geometric features from light field inputs using vision foundation models and convert them into view-dependent Gaussian splats. These splats serve as a unified object representation, supporting differentiable rendering and pose optimization. We further introduce a light field object tracking dataset containing challenging reflective objects with precise ground-truth poses. Experiments demonstrate that our method is competitive with state-of-the-art model-based trackers in these difficult cases, paving the way toward universal object tracking in robotic systems. Code/data available at https://github.com/nagonch/LiFT-6DoF.
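To illustrate the pose-optimization idea the abstract describes (a splat-based object representation rendered differentiably and refined by gradient descent on an image loss), here is a minimal sketch. It is not the authors' implementation: the renderer is a toy isotropic point-splatting stand-in, and names such as `splat_render` and `axis_angle_to_matrix` are hypothetical; in the actual method the splats are view-dependent and built from light field features.

```python
# Minimal sketch (assumed, not the paper's code): refine a 6DoF pose of a
# Gaussian-splat object by minimizing a photometric loss on a differentiably
# rendered image.
import torch

def axis_angle_to_matrix(r):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = torch.linalg.norm(r) + 1e-8
    k = r / theta
    zero = torch.zeros_like(theta)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    return torch.eye(3, dtype=r.dtype) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def splat_render(centers, colors, R, t, K_intr, hw=(64, 64), sigma=1.5):
    """Toy differentiable renderer: project splat centers and blend Gaussian footprints."""
    H, W = hw
    cam = centers @ R.T + t                       # world -> camera frame
    uv = cam @ K_intr.T                           # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-3)   # perspective divide
    gy, gx = torch.meshgrid(torch.arange(H, dtype=centers.dtype),
                            torch.arange(W, dtype=centers.dtype), indexing="ij")
    d2 = (gx[None] - uv[:, 0, None, None])**2 + (gy[None] - uv[:, 1, None, None])**2
    w = torch.exp(-d2 / (2 * sigma**2))           # (N, H, W) splat footprints
    img = torch.einsum("nhw,nc->hwc", w, colors)
    return img / (w.sum(0)[..., None] + 1e-6)     # normalized color blend

# Toy object: random splats. In the real pipeline, `observed` would be a light field view.
torch.manual_seed(0)
centers = torch.randn(50, 3) * 0.1 + torch.tensor([0.0, 0.0, 2.0])
colors = torch.rand(50, 3)
K_intr = torch.tensor([[60.0, 0.0, 32.0], [0.0, 60.0, 32.0], [0.0, 0.0, 1.0]])
observed = splat_render(centers, colors, torch.eye(3), torch.zeros(3), K_intr).detach()

# Optimize a perturbed pose (axis-angle rotation + translation) via photometric loss.
r = torch.tensor([0.05, -0.03, 0.02], requires_grad=True)
t = torch.tensor([0.02, -0.01, 0.03], requires_grad=True)
opt = torch.optim.Adam([r, t], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    rendered = splat_render(centers, colors, axis_angle_to_matrix(r), t, K_intr)
    loss = torch.nn.functional.mse_loss(rendered, observed)
    loss.backward()
    opt.step()
print(f"final photometric loss: {loss.item():.6f}")
```

Because the renderer is differentiable end to end, the gradient of the image loss flows back to the pose parameters, which is the core mechanism that lets the splat representation drive 6DoF tracking without a pre-captured reference model.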
Similar Papers
6-DoF Object Tracking with Event-based Optical Flow and Frames
CV and Pattern Recognition
Tracks fast-moving objects with special cameras.
Beyond Frame-wise Tracking: A Trajectory-based Paradigm for Efficient Point Cloud Tracking
CV and Pattern Recognition
Helps robots track moving things better, faster.
Physics-Guided Fusion for Robust 3D Tracking of Fast Moving Small Objects
CV and Pattern Recognition
Spots tiny, fast things in 3D space.