Diffusion-based G-buffer generation and rendering

Published: March 18, 2025 | arXiv ID: 2503.15147v1

By: Bowen Xue, Giuseppe Claudio Guarnera, Shuang Zhao and more

Potential Business Impact:

Lets users edit AI-generated images after the fact, for example moving objects or adjusting the lighting, by working directly on the scene's intermediate G-buffer channels.

Business Areas:
GPU Hardware

Despite recent advances in text-to-image generation, controlling geometric layout and material properties in synthesized scenes remains challenging. We present a novel pipeline that first produces a G-buffer (albedo, normals, depth, roughness, and metallic) from a text prompt and then renders a final image through a modular neural network. This intermediate representation enables fine-grained editing: users can copy and paste within specific G-buffer channels to insert or reposition objects, or apply masks to the irradiance channel to adjust lighting locally. As a result, real objects can be seamlessly integrated into virtual scenes, and virtual objects can be placed into real environments with high fidelity. By separating scene decomposition from image rendering, our method offers a practical balance between detailed post-generation control and efficient text-driven synthesis. We demonstrate its effectiveness on a variety of examples, showing that G-buffer editing significantly extends the flexibility of text-guided image generation.
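To make the editing workflow concrete, below is a minimal sketch (not the authors' code) of how a G-buffer-style intermediate representation can be edited before rendering. The channel names follow the abstract (albedo, normals, depth, roughness, metallic, plus irradiance for lighting); the array shapes, helper functions `paste_region` and `dim_irradiance`, and the use of NumPy are illustrative assumptions, and the neural renderer itself is only referenced, not implemented.

```python
# Sketch of G-buffer editing: the intermediate representation is a set of
# per-pixel channels, and edits such as "copy and paste within a channel"
# or "mask the irradiance" are plain array operations performed before the
# neural renderer is invoked. Helper names here are hypothetical.
import numpy as np

H, W = 512, 512

# Hypothetical G-buffer layout using the channels named in the abstract.
gbuffer = {
    "albedo":     np.zeros((H, W, 3), dtype=np.float32),
    "normals":    np.zeros((H, W, 3), dtype=np.float32),
    "depth":      np.ones((H, W, 1), dtype=np.float32),
    "roughness":  np.full((H, W, 1), 0.5, dtype=np.float32),
    "metallic":   np.zeros((H, W, 1), dtype=np.float32),
    "irradiance": np.ones((H, W, 3), dtype=np.float32),
}

def paste_region(channel, src_box, dst_corner):
    """Copy a rectangular region within one G-buffer channel (object repositioning)."""
    y0, x0, y1, x1 = src_box
    dy, dx = dst_corner
    patch = channel[y0:y1, x0:x1].copy()
    channel[dy:dy + patch.shape[0], dx:dx + patch.shape[1]] = patch

def dim_irradiance(irradiance, mask, factor=0.5):
    """Locally adjust lighting by scaling the irradiance channel under a boolean mask."""
    return np.where(mask[..., None], irradiance * factor, irradiance)

# Example edits: move an object's footprint across all geometry/material
# channels, then darken the lighting in one corner of the image.
for name in ("albedo", "normals", "depth", "roughness", "metallic"):
    paste_region(gbuffer[name], src_box=(100, 100, 164, 164), dst_corner=(300, 300))

mask = np.zeros((H, W), dtype=bool)
mask[:128, :128] = True
gbuffer["irradiance"] = dim_irradiance(gbuffer["irradiance"], mask, factor=0.3)

# The edited G-buffer would then be passed to the learned renderer,
# e.g. final_image = renderer(gbuffer); the renderer is not sketched here.
```

The point of the sketch is the separation the paper describes: because scene decomposition and rendering are decoupled, user edits reduce to simple per-channel operations on the intermediate buffers rather than changes to the generative model itself.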

Page Count
10 pages

Category
Computer Science:
Graphics