Neural USD: An object-centric framework for iterative editing and control
By: Alejandro Escontrela, Shrinu Kushagra, Sjoerd van Steenkiste, and more
Potential Business Impact:
Lets you change one part of a picture without messing up the rest.
Remarkable progress has been made in controllable generative modeling, especially over the last few years, yet several challenges remain. One of them is precise, iterative object editing: in many current methods, editing a generated image by changing its conditioning signals (for example, recoloring a particular object or swapping the background while keeping other elements fixed) often leads to unintended global changes in the scene. In this work, we take first steps toward addressing these challenges. Taking inspiration from the Universal Scene Description (USD) standard developed in the computer graphics community, we introduce the "Neural Universal Scene Descriptor," or Neural USD. In this framework, scenes and objects are represented in a structured, hierarchical manner. This accommodates diverse conditioning signals, minimizes model-specific constraints, and enables per-object control over appearance, geometry, and pose. We further apply a fine-tuning approach that keeps these control signals disentangled from one another. We evaluate several design considerations for the framework, demonstrating how Neural USD enables iterative and incremental workflows. More information at: https://escontrela.me/neural_usd
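To make the object-centric structure concrete, here is a minimal Python sketch of what such a hierarchical, per-object scene descriptor could look like. It is an assumption-laden illustration, not the paper's actual implementation: the NeuralPrim class, its fields, and the edit helper are hypothetical names, and the real framework conditions a generative model on these signals rather than storing raw arrays.

```python
# A minimal, illustrative sketch (not the authors' implementation) of a
# "Neural USD"-style hierarchical scene descriptor: the scene is a tree of
# prims, and each prim carries its own appearance, geometry, and pose signals
# so one object can be edited without touching the others.
# All class and field names below are assumptions made for illustration.
from dataclasses import dataclass, field, replace
from typing import List, Optional

import numpy as np


@dataclass
class NeuralPrim:
    """One object (or group of objects) in the scene hierarchy."""
    name: str
    appearance: Optional[np.ndarray] = None  # e.g. an appearance embedding or reference crop
    geometry: Optional[np.ndarray] = None    # e.g. a shape latent or segmentation mask
    pose: Optional[np.ndarray] = None        # e.g. a 4x4 object-to-world transform
    children: List["NeuralPrim"] = field(default_factory=list)

    def edit(self, **updates) -> "NeuralPrim":
        """Return a copy with some control signals replaced (iterative editing)."""
        return replace(self, **updates)


# Example: move only the "mug" prim; every other prim's signals stay untouched,
# which is the kind of per-object, incremental edit the framework targets.
scene = NeuralPrim(
    name="scene",
    children=[
        NeuralPrim(name="table", pose=np.eye(4)),
        NeuralPrim(name="mug", pose=np.eye(4)),
    ],
)
shifted_pose = np.eye(4)
shifted_pose[:3, 3] = [0.1, 0.0, 0.0]  # translate the mug 10 cm along x
scene.children[1] = scene.children[1].edit(pose=shifted_pose)
```

The point of the sketch is the interface: an edit touches exactly one prim's signals, which mirrors the disentangled, per-object control the abstract describes.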
Similar Papers
Uni-Neur2Img: Unified Neural Signal-Guided Image Generation, Editing, and Stylization via Diffusion Transformers
CV and Pattern Recognition
Turns brain waves into pictures.
Real2USD: Scene Representations in Universal Scene Description Language
Robotics
Robots understand tasks by reading scene descriptions.
Neural Scene Designer: Self-Styled Semantic Image Manipulation
CV and Pattern Recognition
Makes edited pictures look like a single seamless photo.