Deep Polycuboid Fitting for Compact 3D Representation of Indoor Scenes
By: Gahye Lee, Hyejeong Yoon, Jungeon Kim, and more
Potential Business Impact:
Maps rooms using simple shapes for virtual tours.
This paper presents a novel framework for compactly representing a 3D indoor scene using a set of polycuboids through a deep learning-based fitting method. Indoor scenes mainly consist of man-made objects, such as furniture, which often exhibit rectilinear geometry. This property allows indoor scenes to be represented using combinations of polycuboids, providing a compact representation that benefits downstream applications like furniture rearrangement. Our framework takes a noisy point cloud as input and first detects six types of cuboid faces using a transformer network. Then, a graph neural network is used to validate the spatial relationships of the detected faces to form potential polycuboids. Finally, each polycuboid instance is reconstructed by forming a set of boxes based on the aggregated face labels. To train our networks, we introduce a synthetic dataset encompassing a diverse range of cuboid and polycuboid shapes that reflect the characteristics of indoor scenes. Our framework generalizes well to real-world indoor scene datasets, including Replica, ScanNet, and scenes captured with an iPhone. The versatility of our method is demonstrated through practical applications, such as virtual room tours and scene editing.
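The final stage described in the abstract, reconstructing each polycuboid instance as a set of boxes from aggregated face labels, can be sketched for a single box. This is a minimal illustration, not the paper's implementation: the transformer-based face detection and GNN validation are assumed to have already produced points tagged with one of the six cuboid face types, and all function and label names here are our own.

```python
# Hypothetical sketch: aggregate points labeled with one of six cuboid
# face types into an axis-aligned box. The upstream stages (transformer
# face detection, GNN face-relation validation) are assumed done.

FACE_TYPES = ("+x", "-x", "+y", "-y", "+z", "-z")

def fit_box_from_faces(labeled_points):
    """Fit an axis-aligned box from face-labeled points.

    labeled_points: list of ((x, y, z), label) with label in FACE_TYPES.
    Returns (lo, hi) corner tuples of the fitted box.
    """
    bounds = {}
    for (x, y, z), label in labeled_points:
        coord = {"x": x, "y": y, "z": z}[label[1]]
        # A "+x" face defines the max-x plane, "-x" the min-x plane, etc.
        if label not in bounds:
            bounds[label] = coord
        elif label[0] == "+":
            bounds[label] = max(bounds[label], coord)
        else:
            bounds[label] = min(bounds[label], coord)
    lo = tuple(bounds["-" + axis] for axis in "xyz")
    hi = tuple(bounds["+" + axis] for axis in "xyz")
    return lo, hi

# Usage: six points, one per face of a unit cube.
points = [
    ((1.0, 0.5, 0.5), "+x"), ((0.0, 0.5, 0.5), "-x"),
    ((0.5, 1.0, 0.5), "+y"), ((0.5, 0.0, 0.5), "-y"),
    ((0.5, 0.5, 1.0), "+z"), ((0.5, 0.5, 0.0), "-z"),
]
lo, hi = fit_box_from_faces(points)  # lo=(0,0,0), hi=(1,1,1)
```

A polycuboid would be a union of several such boxes, with the GNN's validated face relationships deciding which labeled faces belong to which box; that grouping step is omitted here.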
Similar Papers
CasaGPT: Cuboid Arrangement and Scene Assembly for Interior Design
CV and Pattern Recognition
Creates realistic 3D rooms from simple shapes.
PixCuboid: Room Layout Estimation from Multi-view Featuremetric Alignment
CV and Pattern Recognition
Maps rooms using many pictures for better understanding.
PointCubeNet: 3D Part-level Reasoning with 3x3x3 Point Cloud Blocks
CV and Pattern Recognition
Teaches computers to see object parts without labels.