Computer vision training dataset generation for robotic environments using Gaussian splatting
By: Patryk Niżeniec, Marcin Iwanowski
Potential Business Impact:
Creates realistic synthetic images for training robot vision.
This paper introduces a novel pipeline for generating large-scale, highly realistic, and automatically labeled datasets for computer vision tasks in robotic environments. Our approach addresses two critical challenges: the domain gap between synthetic and real-world imagery, and the time-consuming bottleneck of manual annotation. We leverage 3D Gaussian Splatting (3DGS) to create photorealistic representations of the operational environment and objects. These assets are then placed in a game engine, where physics simulations produce natural object arrangements. A novel two-pass rendering technique combines the realism of splats with a shadow map generated from proxy meshes; this map is algorithmically composited with the image to add both physically plausible shadows and subtle highlights, significantly enhancing realism. Pixel-perfect segmentation masks are generated automatically and formatted for direct use with object detection models such as YOLO. Our experiments show that a hybrid training strategy, combining a small set of real images with a large volume of our synthetic data, yields the best detection and segmentation performance, confirming it as an efficient strategy for achieving robust and accurate models.
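The abstract describes the two-pass shadow compositing but not its exact formula. The following is a minimal sketch of how such a blend could work, assuming the shadow map is a single-channel image in [0, 1] where 0.5 is neutral, values below 0.5 mark proxy-mesh shadow, and values above 0.5 mark direct light; the function name and the strength parameters are hypothetical, not the paper's API.

```python
import numpy as np

def composite_shadow_map(splat_rgb, shadow_map,
                         shadow_strength=0.6, highlight_strength=0.25):
    """Blend a proxy-mesh shadow map into a Gaussian-splat render.

    splat_rgb  : float image in [0, 1], shape (H, W, 3).
    shadow_map : float single-channel map in [0, 1], shape (H, W);
                 0.5 is neutral, < 0.5 shadowed, > 0.5 directly lit.
    (Hypothetical interface; the paper does not publish this formula.)
    """
    factor = shadow_map[..., None]                 # (H, W, 1) for broadcasting
    shade = np.clip(0.5 - factor, 0.0, 0.5) * 2.0  # shadow amount in [0, 1]
    out = splat_rgb * (1.0 - shadow_strength * shade)      # darken shadowed pixels
    light = np.clip(factor - 0.5, 0.0, 0.5) * 2.0  # highlight amount in [0, 1]
    out = out + highlight_strength * light * (1.0 - out)   # subtle highlight lift
    return np.clip(out, 0.0, 1.0)
```

Because every object is rendered with a known identity, exporting the automatic masks is mechanical. Below is a similarly hedged sketch of converting one binary instance mask into the YOLO segmentation label format (one line per polygon: class id followed by normalized x/y pairs), using OpenCV contour extraction; `mask_to_yolo_seg` is an illustrative name, not taken from the paper.

```python
import cv2
import numpy as np

def mask_to_yolo_seg(mask: np.ndarray, class_id: int) -> list[str]:
    """Convert a binary instance mask (H, W, uint8, values 0/255)
    into YOLO segmentation label lines with coordinates in [0, 1]."""
    h, w = mask.shape
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    lines = []
    for contour in contours:
        if len(contour) < 3:          # skip degenerate polygons
            continue
        pts = contour.reshape(-1, 2).astype(np.float64)
        pts[:, 0] /= w                # normalize x by image width
        pts[:, 1] /= h                # normalize y by image height
        coords = " ".join(f"{v:.6f}" for v in pts.flatten())
        lines.append(f"{class_id} {coords}")
    return lines
```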
Similar Papers
Synthetic Dataset Generation for Autonomous Mobile Robots Using 3D Gaussian Splatting for Vision Training
Robotics
Trains robot vision faster with synthetic data.
Cut-and-Splat: Leveraging Gaussian Splatting for Synthetic Data Generation
CV and Pattern Recognition
Generates realistic synthetic images for training AI.
Novel Demonstration Generation with Gaussian Splatting Enables Robust One-Shot Manipulation
Robotics
Robots learn manipulation better from synthetic 3D scenes.