Align 3D Representation and Text Embedding for 3D Content Personalization
By: Qi Song, Ziyuan Luo, Ka Chun Cheung, and more
Potential Business Impact:
Changes 3D objects using words, no retraining.
Recent advances in NeRF and 3DGS have significantly enhanced the efficiency and quality of 3D content synthesis. However, efficient personalization of generated 3D content remains a critical challenge. Current 3D personalization approaches predominantly rely on knowledge distillation-based methods, which require computationally expensive retraining procedures. To address this challenge, we propose Invert3D, a novel framework for convenient 3D content personalization. Vision-language models such as CLIP enable direct image personalization through aligned vision-text embedding spaces, but the inherent structural differences between 3D content and 2D images preclude directly applying these techniques to 3D personalization. Our approach bridges this gap by aligning 3D representations with the text embedding space. Specifically, we develop a camera-conditioned 3D-to-text inversion mechanism that projects 3D content into a 3D embedding aligned with text embeddings. This alignment enables efficient manipulation and personalization of 3D content through natural language prompts, eliminating the need for computationally expensive retraining. Extensive experiments demonstrate that Invert3D achieves effective personalization of 3D content. Our code is available at: https://github.com/qsong2001/Invert3D.
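To make the core idea concrete, here is a minimal sketch of what a camera-conditioned projection of 3D features into a text-aligned embedding space could look like, trained with a simple cosine-alignment loss against frozen text embeddings (e.g., from CLIP's text encoder). The module names, tensor dimensions, and loss choice are illustrative assumptions, not the authors' implementation; refer to the linked repository for the actual method.

```python
# Hypothetical sketch (not the official Invert3D code): align a
# camera-conditioned 3D embedding with a text embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CameraConditioned3DInverter(nn.Module):
    """Projects pooled features rendered from a 3D asset, conditioned on the
    camera pose, into an embedding with the same dimension as the text embeddings."""
    def __init__(self, feat_dim=512, pose_dim=16, embed_dim=768):
        super().__init__()
        # Encode the flattened camera matrix into the feature space.
        self.pose_mlp = nn.Sequential(
            nn.Linear(pose_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim)
        )
        # Project camera-conditioned features into the text embedding space.
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, embed_dim), nn.GELU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, feats_3d, camera_pose):
        # feats_3d: (B, feat_dim) features pooled from rendered views of the 3D content
        # camera_pose: (B, pose_dim) flattened 4x4 camera-to-world matrix
        cond = feats_3d + self.pose_mlp(camera_pose)
        return F.normalize(self.proj(cond), dim=-1)

def alignment_loss(embed_3d, text_embed):
    # Pull the 3D embedding toward the frozen text embedding (cosine similarity).
    text_embed = F.normalize(text_embed, dim=-1)
    return 1.0 - (embed_3d * text_embed).sum(dim=-1).mean()

# Toy usage with random tensors standing in for real renders and text embeddings.
model = CameraConditioned3DInverter()
feats = torch.randn(4, 512)   # pooled features from 4 rendered views
poses = torch.randn(4, 16)    # flattened camera matrices for those views
text = torch.randn(4, 768)    # text embeddings of the target prompt
loss = alignment_loss(model(feats, poses), text)
loss.backward()
```

Once such an alignment is learned, personalization could amount to steering the 3D embedding with new prompt embeddings at inference time, which is what lets the method avoid retraining the underlying 3D representation.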
Similar Papers
Controllable 3D Object Generation with Single Image Prompt
CV and Pattern Recognition
Creates 3D objects from pictures, not just words.
Feedforward 3D Editing via Text-Steerable Image-to-3D
CV and Pattern Recognition
Lets you change 3D shapes with words.
Directional Textual Inversion for Personalized Text-to-Image Generation
Machine Learning (CS)
Makes AI images match your words better.