One Walk is All You Need: Data-Efficient 3D RF Scene Reconstruction with Human Movements
By: Yiheng Bian, Zechen Li, Lanqing Yang, and more
Potential Business Impact:
See hidden 3D scenes with just one walk.
Reconstructing 3D Radiance Field (RF) scenes through opaque obstacles is a long-standing goal, yet it is fundamentally constrained by a laborious data acquisition process requiring thousands of static measurements, which treats human motion as noise to be filtered out. This work introduces a new paradigm with a core objective: fast, data-efficient, high-fidelity RF reconstruction of occluded 3D static scenes using only a single, brief human walk. We argue that this unstructured motion is not noise but an information-rich signal available for reconstruction. To exploit it, we design a factorization framework based on composite 3D Gaussian Splatting (3DGS) that learns to disentangle the dynamic effects of human motion from the persistent static scene geometry within a raw RF stream. Trained on just a single 60-second casual walk, our model reconstructs the full static scene with a Structural Similarity Index (SSIM) of 0.96, outperforming heavily sampled state-of-the-art (SOTA) methods by 12%. By turning human movement into a valuable signal, our method eliminates the data acquisition bottleneck and paves the way for on-the-fly 3D RF mapping of unseen environments.
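The core factorization idea, composing a persistent static set of Gaussians with a transient, per-frame dynamic set and fitting both against the raw measurement stream, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration in PyTorch, not the paper's implementation: it uses a simplified isotropic 2D splatting onto a toy "RF image" plane, random stand-in frames, and invented names and shapes throughout.

```python
# Toy sketch (hypothetical, not the paper's code): factorize an RF frame stream
# into a persistent static set of 2D Gaussians and a per-frame dynamic set.
import torch

H, W = 64, 64                                    # toy "RF image" resolution
ys, xs = torch.meshgrid(
    torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
grid = torch.stack([xs, ys], dim=-1)             # (H, W, 2) pixel coordinates

def splat(mu, log_sigma, amp):
    """Render isotropic 2D Gaussians additively onto the image plane."""
    d2 = ((grid[None] - mu[:, None, None]) ** 2).sum(-1)        # (N, H, W)
    w = torch.exp(-0.5 * d2 / torch.exp(log_sigma)[:, None, None] ** 2)
    return (amp[:, None, None] * w).sum(0)                      # (H, W)

# Static Gaussians: one set shared by every frame (the occluded scene).
N_static, N_dyn, T = 32, 8, 60
mu_s = torch.rand(N_static, 2, requires_grad=True)
sig_s = torch.full((N_static,), -2.5, requires_grad=True)
amp_s = torch.rand(N_static, requires_grad=True)

# Dynamic Gaussians: a small set whose parameters vary per frame (the walker).
mu_d = torch.rand(T, N_dyn, 2, requires_grad=True)
sig_d = torch.full((T, N_dyn), -2.5, requires_grad=True)
amp_d = torch.rand(T, N_dyn, requires_grad=True)

opt = torch.optim.Adam([mu_s, sig_s, amp_s, mu_d, sig_d, amp_d], lr=1e-2)
frames = torch.rand(T, H, W)                     # stand-in for the RF stream

for step in range(200):
    t = torch.randint(T, (1,)).item()
    static_img = splat(mu_s, sig_s, amp_s)                      # persistent part
    dynamic_img = splat(mu_d[t], sig_d[t], amp_d[t])            # transient part
    pred = static_img + dynamic_img                             # composite render
    loss = (pred - frames[t]).abs().mean()                      # fit the raw frame
    opt.zero_grad(); loss.backward(); opt.step()

# After training, rendering only the static set gives the occluded scene estimate.
scene = splat(mu_s, sig_s, amp_s).detach()
```

The design point the sketch tries to convey is that the moving person is absorbed by the per-frame dynamic Gaussians rather than being filtered out, so the shared static set is left to explain only the geometry that persists across the walk.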
Similar Papers
3D Gaussian Representations with Motion Trajectory Field for Dynamic Scene Reconstruction
Robotics
Makes videos show moving things from new angles.
Asset-Driven Semantic Reconstruction of Dynamic Scene with Multi-Human-Object Interactions
CV and Pattern Recognition
Makes 3D models of moving people and things.
On-the-fly Large-scale 3D Reconstruction from Multi-Camera Rigs
CV and Pattern Recognition
Builds 3D worlds from many cameras fast.