Diff-3DCap: Shape Captioning with Diffusion Models
By: Zhenyu Shu, Jiawei Wen, Shiyang Li, and more
Potential Business Impact:
Helps computers describe 3D shapes with words.
3D shape captioning is an important task in computer graphics that has attracted considerable interest in recent years. Traditional approaches frequently rely on costly voxel representations or object detection pipelines, yet often fail to deliver satisfactory results. To address these challenges, we introduce Diff-3DCap, which represents a 3D object by a sequence of projected views and uses a continuous diffusion model to generate its caption. More precisely, during the forward phase, the continuous diffusion model perturbs the embedded captions with Gaussian noise; during the reverse phase, it predicts the reconstructed annotation. Within the diffusion framework, we leverage a visual embedding obtained from a pre-trained visual-language model, which naturally serves as a guiding signal and eliminates the need for an additional classifier. Extensive experimental results indicate that Diff-3DCap achieves performance comparable to current state-of-the-art methods.
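To make the described pipeline concrete, below is a minimal PyTorch sketch of continuous diffusion over caption embeddings with a visual-embedding conditioning signal, as the abstract outlines. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: all names (CaptionDenoiser, forward_noise, make_schedule), the architecture, and the noise schedule are hypothetical, and the visual features are stand-ins for those a pre-trained visual-language model would produce from the projected views.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the forward/reverse diffusion over caption
# embeddings described in the abstract; all module and variable names
# are illustrative, not taken from the paper's code.

class CaptionDenoiser(nn.Module):
    """Transformer that predicts the clean caption embedding x0 from a
    noisy embedding x_t, conditioned on a visual embedding (assumed to
    come from a pre-trained visual-language model)."""
    def __init__(self, embed_dim=512, num_layers=4, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.time_mlp = nn.Sequential(nn.Linear(1, embed_dim), nn.SiLU(),
                                      nn.Linear(embed_dim, embed_dim))

    def forward(self, x_t, t, visual_embed):
        # Inject the timestep, then prepend the visual embedding as a
        # conditioning token, so it guides denoising without a
        # separate classifier.
        h = x_t + self.time_mlp(t.float().view(-1, 1)).unsqueeze(1)
        h = torch.cat([visual_embed.unsqueeze(1), h], dim=1)
        return self.encoder(h)[:, 1:]  # drop the conditioning token

def make_schedule(T=1000):
    # Linear beta schedule; returns cumulative alpha_bar per step.
    betas = torch.linspace(1e-4, 0.02, T)
    return torch.cumprod(1.0 - betas, dim=0)

def forward_noise(x0, t, alpha_bar):
    """Forward phase q(x_t | x_0): perturb caption embeddings with
    Gaussian noise at timestep t."""
    a = alpha_bar[t].view(-1, 1, 1)
    noise = torch.randn_like(x0)
    return a.sqrt() * x0 + (1 - a).sqrt() * noise

# One training step: noise the embedded caption, then regress the
# clean embedding from the noisy one, guided by visual features.
alpha_bar = make_schedule()
model = CaptionDenoiser()
x0 = torch.randn(2, 16, 512)      # embedded captions (batch, len, dim)
visual = torch.randn(2, 512)      # placeholder projected-view features
t = torch.randint(0, 1000, (2,))
x_t = forward_noise(x0, t, alpha_bar)
loss = nn.functional.mse_loss(model(x_t, t, visual), x0)
loss.backward()
```

At inference time, under the same assumptions, one would start from pure Gaussian noise and iteratively denoise with the visual embedding fixed, then round the final embeddings back to discrete tokens.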
Similar Papers
SGDiff: Scene Graph Guided Diffusion Model for Image Collaborative SegCaptioning
CV and Pattern Recognition
Draw a box, get many picture descriptions.
View-Consistent Diffusion Representations for 3D-Consistent Video Generation
CV and Pattern Recognition
Makes computer-made videos look more real.
3D-Consistent Multi-View Editing by Diffusion Guidance
CV and Pattern Recognition
Makes 3D pictures look right after editing.