Feedforward 3D Editing via Text-Steerable Image-to-3D
By: Ziqi Ma, Hongqiao Chen, Yisong Yue, and more
Potential Business Impact:
Lets you change 3D shapes with words.
Recent progress in image-to-3D generation has opened up immense possibilities for design, AR/VR, and robotics. However, to use AI-generated 3D assets in real applications, a critical requirement is the ability to edit them easily. We present Steer3D, a feedforward method that adds text steerability to image-to-3D models, enabling editing of generated 3D assets with language. Our approach is inspired by ControlNet, which we adapt to image-to-3D generation to enable text steering directly in a single forward pass. We build a scalable data engine for automatic data generation and develop a two-stage training recipe based on flow-matching training and Direct Preference Optimization (DPO). Compared to competing methods, Steer3D follows language instructions more faithfully and maintains better consistency with the original 3D asset, while being 2.4x to 28.5x faster. Steer3D demonstrates that a new modality (text) can be added to steer pretrained image-to-3D generative models with only 100k training examples. Project website: https://glab-caltech.github.io/steer3d/
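The abstract names two core ingredients: a ControlNet-style branch that injects a new conditioning signal (here, text) into a frozen pretrained backbone, and a flow-matching training objective. The sketch below illustrates both in miniature. It is not the authors' code; the toy backbone, all module names, dimensions, and the omission of timestep conditioning are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout, not Steer3D's actual architecture):
# (1) a ControlNet-style text branch adding residuals into a frozen backbone,
# (2) one flow-matching training step on a straight noise-to-target path.

import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for a pretrained image-to-3D flow model (kept frozen).
    A real backbone would also condition on the timestep t; omitted here."""
    def __init__(self, dim=256, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_blocks)])

    def forward(self, x, residuals=None):
        # residuals: per-block additions from the control branch (ControlNet style)
        for i, block in enumerate(self.blocks):
            x = torch.relu(block(x))
            if residuals is not None:
                x = x + residuals[i]
        return x  # predicted velocity field

class TextControlBranch(nn.Module):
    """Trainable branch mapping a text embedding to per-block residuals.
    Output projections are zero-initialized (as in ControlNet), so the
    branch starts as a no-op and cannot disturb the pretrained model."""
    def __init__(self, text_dim=512, dim=256, n_blocks=4):
        super().__init__()
        self.proj = nn.Linear(text_dim, dim)
        self.zero_outs = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_blocks)])
        for z in self.zero_outs:
            nn.init.zeros_(z.weight)
            nn.init.zeros_(z.bias)

    def forward(self, text_emb):
        h = torch.relu(self.proj(text_emb))
        return [z(h) for z in self.zero_outs]

backbone = TinyBackbone().eval()
for p in backbone.parameters():
    p.requires_grad_(False)  # pretrained weights stay frozen
control = TextControlBranch()
opt = torch.optim.AdamW(control.parameters(), lr=1e-4)

# One flow-matching step: regress the constant velocity (x1 - x0) at a
# random point on the linear path between noise x0 and the target latent x1.
x0 = torch.randn(8, 256)        # noise sample
x1 = torch.randn(8, 256)        # latent of the edited 3D asset (toy stand-in)
text_emb = torch.randn(8, 512)  # embedding of the edit instruction
t = torch.rand(8, 1)
xt = (1 - t) * x0 + t * x1      # point on the interpolation path
v_target = x1 - x0              # velocity of that path

v_pred = backbone(xt, residuals=control(text_emb))
loss = torch.nn.functional.mse_loss(v_pred, v_target)
loss.backward()                 # gradients flow only into the control branch
opt.step()
opt.zero_grad()
```

The second training stage the abstract mentions (DPO) would follow the same pattern but replace the regression loss with a preference loss over paired good/bad edits; only the control branch would be updated in either stage.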
Similar Papers
Articulate3D: Zero-Shot Text-Driven 3D Object Posing
CV and Pattern Recognition
Moves 3D objects with just words.
Native 3D Editing with Full Attention
CV and Pattern Recognition
Changes 3D shapes with simple text commands.
A Generative Approach to High Fidelity 3D Reconstruction from Text Data
CV and Pattern Recognition
Turns words into 3D objects.