Score: 2

SceneNAT: Masked Generative Modeling for Language-Guided Indoor Scene Synthesis

Published: January 12, 2026 | arXiv ID: 2601.07218v1

By: Jeongjun Choi, Yeonsoo Park, H. Jin Kim

Potential Business Impact:

Generates complete 3D room layouts from natural-language descriptions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present SceneNAT, a single-stage masked non-autoregressive Transformer that synthesizes complete 3D indoor scenes from natural language instructions through only a few parallel decoding passes, offering improved performance and efficiency compared to prior state-of-the-art approaches. SceneNAT is trained via masked modeling over fully discretized representations of both semantic and spatial attributes. By applying a masking strategy at both the attribute level and the instance level, the model can better capture intra-object and inter-object structure. To boost relational reasoning, SceneNAT employs a dedicated triplet predictor for modeling the scene's layout and object relationships by mapping a set of learnable relation queries to a sparse set of symbolic triplets (subject, predicate, object). Extensive experiments on the 3D-FRONT dataset demonstrate that SceneNAT achieves superior performance compared to state-of-the-art autoregressive and diffusion baselines in both semantic compliance and spatial arrangement accuracy, while operating with substantially lower computational cost.
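The abstract's "few parallel decoding passes" refers to masked non-autoregressive generation: start with every discrete token masked, predict all of them in parallel, commit only the most confident predictions, and repeat on a shrinking mask schedule. A minimal sketch of that decoding loop (in the style of MaskGIT-like samplers) is below; the `toy_predictor`, the cosine schedule, and all names are illustrative assumptions, not SceneNAT's actual implementation.

```python
import math
import random

MASK = -1  # sentinel for a still-masked attribute token

def toy_predictor(tokens):
    """Stand-in for the Transformer (hypothetical): for each masked slot,
    return a (token, confidence) guess. A real model would produce logits
    over the discrete attribute vocabulary for every slot at once."""
    random.seed(0)  # deterministic stand-in confidences for illustration
    return {i: (i % 10, random.random())
            for i, t in enumerate(tokens) if t == MASK}

def parallel_masked_decode(num_slots, num_passes=4):
    """Masked non-autoregressive decoding: begin fully masked, and at each
    pass commit the highest-confidence predictions, leaving a cosine-
    scheduled fraction of slots masked for the next pass."""
    tokens = [MASK] * num_slots
    for step in range(num_passes):
        preds = toy_predictor(tokens)
        if not preds:
            break
        # Fraction of slots that should remain masked after this pass.
        frac = math.cos(math.pi / 2 * (step + 1) / num_passes)
        keep_masked = int(frac * num_slots)
        # Commit the most confident predictions first.
        ranked = sorted(preds.items(), key=lambda kv: kv[1][1], reverse=True)
        n_commit = max(1, len(preds) - keep_masked)
        for i, (tok, _conf) in ranked[:n_commit]:
            tokens[i] = tok
    return tokens

scene_tokens = parallel_masked_decode(12)  # 12 discrete attribute slots
```

After the final pass every slot holds a committed token, so a full scene's semantic and spatial attributes are produced in a handful of parallel steps rather than one token at a time as in autoregressive baselines.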

Country of Origin
🇰🇷 Korea, Republic of

Repos / Data Links

Page Count
39 pages

Category
Computer Science:
CV and Pattern Recognition