IL3D: A Large-Scale Indoor Layout Dataset for LLM-Driven 3D Scene Generation
By: Wenxu Zhou, Kaixuan Nie, Hang Du, and more
Potential Business Impact:
Builds virtual rooms from words for robots.
In this study, we present IL3D, a large-scale dataset meticulously designed for large language model (LLM)-driven 3D scene generation, addressing the pressing demand for diverse, high-quality training data in indoor layout design. Comprising 27,816 indoor layouts across 18 prevalent room types and a library of 29,215 high-fidelity 3D object assets, IL3D is enriched with instance-level natural language annotations to support robust multimodal learning for vision-language tasks. We establish rigorous benchmarks to evaluate LLM-driven scene generation. Experimental results show that supervised fine-tuning (SFT) of LLMs on IL3D significantly improves generalization and surpasses the performance of SFT on other datasets. IL3D offers flexible multimodal data export capabilities, including point clouds, 3D bounding boxes, multiview images, depth maps, normal maps, and semantic masks, enabling seamless adaptation to various visual tasks. As a versatile and robust resource, IL3D significantly advances research in 3D scene generation and embodied intelligence by providing high-fidelity scene data that supports the environment perception tasks of embodied agents.
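The abstract does not specify IL3D's layout schema or how layouts are serialized for supervised fine-tuning, so the following is only a minimal sketch of how an instance-annotated layout might be represented and flattened into an SFT prompt/response pair. Every name here (`Instance`, `Layout`, `layout_to_sft_pair`, the `bbox` convention, and all field names) is a hypothetical illustration, not the dataset's actual format.

```python
import json
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema: the real IL3D record format is not given in the abstract.
@dataclass
class Instance:
    asset_id: str          # key into the 3D object asset library
    category: str          # e.g. "sofa", "floor_lamp"
    caption: str           # instance-level natural language annotation
    bbox: List[float]      # assumed box encoding: [cx, cy, cz, dx, dy, dz, yaw]

@dataclass
class Layout:
    room_type: str         # one of the 18 room types
    instances: List[Instance] = field(default_factory=list)

def layout_to_sft_pair(layout: Layout) -> dict:
    """Flatten a layout into a prompt/response pair for supervised
    fine-tuning: the prompt requests a room of the given type, and the
    response serializes each object's category, caption, and box as JSON."""
    prompt = f"Generate an indoor layout for a {layout.room_type}."
    response = json.dumps(
        [
            {"category": inst.category, "caption": inst.caption, "bbox": inst.bbox}
            for inst in layout.instances
        ]
    )
    return {"prompt": prompt, "response": response}

if __name__ == "__main__":
    demo = Layout(
        room_type="living room",
        instances=[
            Instance(
                "asset_0001",
                "sofa",
                "a three-seat gray fabric sofa against the wall",
                [2.1, 0.0, 1.5, 2.2, 0.9, 0.9, 0.0],
            ),
        ],
    )
    print(layout_to_sft_pair(demo))
```

Under these assumptions, each of the 27,816 layouts would yield one training pair, and the same structured records could be rendered out to the other export modalities (point clouds, depth maps, semantic masks) by a separate pipeline.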
Similar Papers
Advancing Multimodal LLMs by Large-Scale 3D Visual Instruction Dataset Generation
Graphics
Teaches computers to understand pictures better.
La La LiDAR: Large-Scale Layout Generation from LiDAR Data
CV and Pattern Recognition
Makes self-driving cars "see" better in 3D.
LSD-3D: Large-Scale 3D Driving Scene Generation with Geometry Grounding
CV and Pattern Recognition
Creates 3D driving worlds for robots to learn.