Any4D: Unified Feed-Forward Metric 4D Reconstruction
By: Jay Karhade, Nikhil Keetha, Yuchen Zhang, and more
Potential Business Impact:
Turns ordinary videos into accurate, metric-scale moving 3D scenes.
We present Any4D, a scalable multi-view transformer for metric-scale, dense, feed-forward 4D reconstruction. Any4D directly generates per-pixel motion and geometry predictions for N frames, in contrast to prior work that typically focuses on either two-view dense scene flow or sparse 3D point tracking. Moreover, unlike other recent methods for 4D reconstruction from monocular RGB videos, Any4D can process additional modalities and sensors, such as RGB-D frames, IMU-based egomotion, and radar Doppler measurements, when available. A key innovation enabling this flexibility is a modular representation of a 4D scene: per-view 4D predictions are encoded using egocentric factors (depth maps and camera intrinsics) represented in local camera coordinates, and allocentric factors (camera extrinsics and scene flow) represented in global world coordinates. We achieve superior performance across diverse setups, both in accuracy (2-3x lower error) and compute efficiency (15x faster), opening avenues for multiple downstream applications.
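To make the factored representation concrete, here is a minimal sketch (not Any4D's actual code) of how the egocentric factors (a metric depth map and camera intrinsics, in local camera coordinates) and the allocentric factors (camera-to-world extrinsics and per-pixel scene flow, in world coordinates) could be composed into a per-pixel 4D prediction. All function names, shapes, and conventions here are illustrative assumptions, not the paper's API.

```python
import numpy as np

def compose_4d(depth, K, T_wc, scene_flow):
    """Compose egocentric and allocentric factors into world-frame 4D points.

    depth:      (H, W) metric depth map (egocentric factor)
    K:          (3, 3) camera intrinsics (egocentric factor)
    T_wc:       (4, 4) camera-to-world extrinsics (allocentric factor)
    scene_flow: (H, W, 3) world-frame 3D motion per pixel (allocentric factor)

    Returns world-frame points at time t and their positions at t+1.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, shape (H, W, 3).
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    # Unproject: intrinsics turn pixels into camera-frame rays,
    # and depth scales them to metric camera-frame points.
    rays = pix @ np.linalg.inv(K).T
    pts_cam = rays * depth[..., None]
    # Extrinsics lift camera-frame points into global world coordinates.
    pts_world = pts_cam @ T_wc[:3, :3].T + T_wc[:3, 3]
    # Scene flow, already in world coordinates, advances points to t+1.
    pts_world_next = pts_world + scene_flow
    return pts_world, pts_world_next
```

One appeal of this factoring is that each sensor can overwrite or refine only the factor it observes: an RGB-D frame constrains the depth map, while IMU egomotion constrains the extrinsics, without the two interfering.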
Similar Papers
Flux4D: Flow-based Unsupervised 4D Reconstruction
CV and Pattern Recognition
Builds 3D worlds from videos in seconds.
Motion4D: Learning 3D-Consistent Motion and Semantics for 4D Scene Understanding
CV and Pattern Recognition
Makes videos show 3D worlds without flickering.
DetAny4D: Detect Anything 4D Temporally in a Streaming RGB Video
CV and Pattern Recognition
Helps self-driving cars see moving objects better.