Material-informed Gaussian Splatting for 3D World Reconstruction in a Digital Twin
By: Andy Huynh, João Malheiro Silva, Holger Caesar, and more
Potential Business Impact:
Creates digital twins of real places using only cameras.
3D reconstruction for Digital Twins often relies on LiDAR-based methods, which provide accurate geometry but lack the semantics and textures naturally captured by cameras. Traditional LiDAR-camera fusion approaches require complex calibration and still struggle with certain materials like glass, which are visible in images but poorly represented in point clouds. We propose a camera-only pipeline that reconstructs scenes using 3D Gaussian Splatting from multi-view images, extracts semantic material masks via vision models, converts Gaussian representations to mesh surfaces with projected material labels, and assigns physics-based material properties for accurate sensor simulation in modern graphics engines and simulators. This approach combines photorealistic reconstruction with physics-based material assignment, providing sensor simulation fidelity comparable to LiDAR-camera fusion while eliminating hardware complexity and calibration requirements. We validate our camera-only method using an internal dataset from an instrumented test vehicle, leveraging LiDAR as ground truth for reflectivity validation alongside image similarity metrics.
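To make the mesh-labeling step concrete, below is a minimal sketch (not the authors' code) of how 2D material masks could be projected onto reconstructed geometry: each vertex is projected into every camera with a pinhole model, samples the material mask, and takes a majority vote across views. The `View` container, material table, and reflectivity values are illustrative placeholders, and occlusion handling (depth-testing against the surface) is omitted for brevity.

```python
"""Sketch: assign material labels to mesh vertices by projecting per-pixel
material masks from multiple views and majority-voting per vertex.
All names and values are illustrative assumptions, not the paper's data."""

from dataclasses import dataclass
import numpy as np


@dataclass
class View:
    K: np.ndarray              # 3x3 pinhole intrinsics
    R: np.ndarray              # 3x3 world-to-camera rotation
    t: np.ndarray              # (3,) world-to-camera translation
    material_mask: np.ndarray  # (H, W) integer material labels from a vision model


# Placeholder material table: label -> (name, illustrative reflectivity in [0, 1]).
MATERIALS = {
    0: ("asphalt", 0.15),
    1: ("vegetation", 0.40),
    2: ("metal", 0.70),
    3: ("glass", 0.05),
}


def project(view: View, points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Project world points (N, 3) into pixels; return (uv, valid mask)."""
    cam = points @ view.R.T + view.t                  # world -> camera frame
    in_front = cam[:, 2] > 1e-6                       # keep points in front of the camera
    pix = cam @ view.K.T
    uv = pix[:, :2] / np.clip(pix[:, 2:3], 1e-6, None)
    h, w = view.material_mask.shape
    in_bounds = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv, in_front & in_bounds


def label_vertices(views: list[View], vertices: np.ndarray) -> np.ndarray:
    """Majority-vote one material label per vertex across all views.

    A full pipeline would also depth-test projections against the surface to
    reject occluded views; that step is omitted here for brevity.
    """
    votes = np.zeros((len(vertices), max(MATERIALS) + 1), dtype=np.int64)
    for view in views:
        uv, valid = project(view, vertices)
        u = uv[valid, 0].astype(int)
        v = uv[valid, 1].astype(int)
        labels = view.material_mask[v, u]             # sample mask at projected pixels
        np.add.at(votes, (np.flatnonzero(valid), labels), 1)
    return votes.argmax(axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    verts = rng.uniform(-1, 1, size=(100, 3)) + np.array([0.0, 0.0, 5.0])
    view = View(
        K=np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]]),
        R=np.eye(3),
        t=np.zeros(3),
        material_mask=rng.integers(0, 4, size=(480, 640)),
    )
    labels = label_vertices([view], verts)
    print("first five vertex materials:", [MATERIALS[int(l)][0] for l in labels[:5]])
```

The per-vertex labels can then index a physics-based material table (reflectivity, roughness, etc.) so that a graphics engine or LiDAR simulator returns plausible sensor responses, including for materials such as glass that real LiDAR captures poorly.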
Similar Papers
Computer vision training dataset generation for robotic environments using Gaussian splatting
CV and Pattern Recognition
Creates realistic fake pictures for robots to learn from.
Robust LiDAR-Camera Calibration with 2D Gaussian Splatting
Robotics
Aligns robot eyes and laser scanner perfectly.