Score: 3

ReSpace: Text-Driven 3D Scene Synthesis and Editing with Preference Alignment

Published: June 3, 2025 | arXiv ID: 2506.02459v2

By: Martin JJ. Bucher, Iro Armeni

BigTech Affiliations: Stanford University

Potential Business Impact:

Creates and edits 3D room layouts from natural-language instructions.

Business Areas:
Virtual Reality Hardware, Software

Scene synthesis and editing has emerged as a promising direction in computer graphics. Current trained approaches for 3D indoor scenes either oversimplify object semantics through one-hot class encodings (e.g., 'chair' or 'table'), require masked diffusion for editing, ignore room boundaries, or rely on floor plan renderings that fail to capture complex layouts. In contrast, LLM-based methods enable richer semantics via natural language (e.g., 'modern studio with light wood furniture') but do not support editing, remain limited to rectangular layouts, or rely on weak spatial reasoning from implicit world models. We introduce ReSpace, a generative framework for text-driven 3D indoor scene synthesis and editing using autoregressive language models. Our approach features a compact structured scene representation with explicit room boundaries that frames scene editing as a next-token prediction task. We leverage a dual-stage training approach combining supervised fine-tuning and preference alignment, yielding a specially trained language model for object addition that accounts for user instructions, spatial geometry, object semantics, and scene-level composition. For scene editing, we employ a zero-shot LLM to handle object removal and to generate prompts for object addition. We further introduce a novel voxelization-based evaluation that captures fine-grained geometry beyond 3D bounding boxes. Experimental results surpass the state of the art on object addition while remaining competitive on full scene synthesis.
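To make the next-token-prediction framing concrete, here is a minimal Python sketch of a structured scene representation with an explicit room boundary, serialized so that adding an object reduces to completing a string. All field names (boundary, objects, desc, pos, size, yaw) and the serialization format are illustrative assumptions, not the paper's actual schema.

```python
import json

# Hypothetical structured scene (field names are assumptions, not the
# paper's schema): explicit room boundary as a floor polygon, plus objects
# described in free-form text with position, size, and orientation.
scene = {
    "boundary": [[0.0, 0.0], [4.5, 0.0], [4.5, 3.2], [0.0, 3.2]],  # floor polygon (m)
    "objects": [
        {
            "desc": "light wood queen bed with beige fabric headboard",
            "pos": [2.2, 0.0, 1.1],   # x, y, z center
            "size": [1.6, 0.5, 2.0],  # width, height, depth
            "yaw": 0.0,               # rotation about the vertical axis (rad)
        },
    ],
}

def scene_to_prompt(scene: dict, instruction: str) -> str:
    """Serialize the scene and an edit instruction into one string, so that
    object addition becomes next-token prediction: the model completes the
    string with one more object entry."""
    return json.dumps(scene) + "\n# instruction: " + instruction + "\n# next object:"

prompt = scene_to_prompt(scene, "add a small nightstand beside the bed")
print(prompt)
```

Under this framing, object removal can be handled symmetrically by a zero-shot LLM that deletes an entry from the serialized object list, which matches the division of labor described in the abstract.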
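The voxelization-based evaluation can likewise be illustrated with a toy occupancy-grid IoU. The voxel_iou helper and the 8x8x8 grid below are hypothetical, and the paper's actual metric may differ; the point is that voxel overlap captures fine-grained geometry (e.g., the empty space under a table) that a 3D bounding-box IoU cannot.

```python
import numpy as np

def voxel_iou(occ_a: np.ndarray, occ_b: np.ndarray) -> float:
    """Intersection-over-union of two boolean occupancy grids of equal shape.
    Unlike bounding-box IoU, this respects the object's actual geometry."""
    inter = np.logical_and(occ_a, occ_b).sum()
    union = np.logical_or(occ_a, occ_b).sum()
    return float(inter) / float(union) if union > 0 else 1.0

# Toy example: two overlapping solid blocks on an 8x8x8 grid.
a = np.zeros((8, 8, 8), dtype=bool); a[1:5, 1:5, 1:5] = True
b = np.zeros((8, 8, 8), dtype=bool); b[3:7, 3:7, 3:7] = True
print(voxel_iou(a, b))  # 8 / (64 + 64 - 8) ≈ 0.067
```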

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Computer Vision and Pattern Recognition