MGPC: Multimodal Network for Generalizable Point Cloud Completion With Modality Dropout and Progressive Decoding
By: Jiangyuan Liu, Hongxuan Ma, Yuhao Zhao, and more
Potential Business Impact:
Fills in missing 3D shapes using pictures and words.
Point cloud completion aims to recover complete 3D geometry from partial observations caused by limited viewpoints and occlusions. Existing learning-based methods, including 3D Convolutional Neural Network (CNN)-based, point-based, and Transformer-based approaches, have achieved strong performance on synthetic benchmarks. However, due to limitations in modality coverage, scalability, and generative capacity, their generalization to novel objects and real-world scenarios remains challenging. In this paper, we propose MGPC, a generalizable multimodal point cloud completion framework that integrates point clouds, RGB images, and text within a unified architecture. MGPC introduces an innovative modality dropout strategy, a Transformer-based fusion module, and a novel progressive generator to improve robustness, scalability, and geometric modeling capability. We further develop an automatic data generation pipeline and construct MGPC-1M, a large-scale benchmark with over 1,000 categories and one million training pairs. Extensive experiments on MGPC-1M and in-the-wild data demonstrate that the proposed method consistently outperforms prior baselines and exhibits strong generalization under real-world conditions.
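The abstract does not spell out how modality dropout works, but the general technique is straightforward: during training, optional modalities (here, image and text) are randomly hidden so the model learns to complete shapes even when only the partial point cloud is available at inference time. The sketch below is a minimal, hypothetical illustration of that idea; the function name, the `required` modality convention, and the use of `None` as a "dropped" placeholder are all assumptions, not details from the paper.

```python
import random

def modality_dropout(features, p_drop=0.3, required="points", rng=None):
    """Randomly hide optional modalities during training.

    `features` maps a modality name (e.g. "points", "image", "text")
    to its feature representation. The `required` modality is never
    dropped, since the completion model always needs the partial
    geometry. Dropped modalities are set to None, so a downstream
    fusion module could substitute a learned placeholder token.
    (Hypothetical sketch; not the paper's actual implementation.)
    """
    rng = rng or random.Random()
    out = {}
    for name, feat in features.items():
        if name != required and rng.random() < p_drop:
            out[name] = None  # modality hidden for this training step
        else:
            out[name] = feat
    return out
```

Training with such a scheme means the fusion module sees every subset of modalities that can occur at test time, which is one plausible reason the paper reports improved robustness on in-the-wild data where images or text prompts may be missing.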
Similar Papers
HGACNet: Hierarchical Graph Attention Network for Cross-Modal Point Cloud Completion
Robotics
Helps robots see and grab objects better.
GenPC: Zero-shot Point Cloud Completion via 3D Generative Priors
CV and Pattern Recognition
Fixes messy 3D scans using smart AI.
From Points to Clouds: Learning Robust Semantic Distributions for Multi-modal Prompts
CV and Pattern Recognition
Teaches computers to understand new things better.