Dreamcrafter: Immersive Editing of 3D Radiance Fields Through Flexible, Generative Inputs and Outputs
By: Cyrus Vachha, Yixiao Kang, Zach Dive, and more
Authoring 3D scenes is a central task for spatial computing applications. Competing visions for lowering existing barriers either (1) focus on immersive, direct manipulation of 3D content or (2) leverage AI techniques that capture real scenes (3D Radiance Fields such as NeRFs and 3D Gaussian Splatting) and modify them at a higher level of abstraction, at the cost of high latency. We unify the complementary strengths of these approaches and investigate how to integrate generative AI advances into real-time, immersive 3D Radiance Field editing. We introduce Dreamcrafter, a VR-based 3D scene editing system that: (1) provides a modular architecture to integrate generative AI algorithms; (2) combines different levels of control for creating objects, including natural language and direct manipulation; and (3) introduces proxy representations that support interaction during high-latency operations. We contribute empirical findings on control preferences and discuss how generative AI interfaces beyond text input enhance creativity in scene editing and world building.
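To illustrate the proxy-representation idea from the abstract, here is a minimal sketch (not Dreamcrafter's actual implementation): a placeholder object is inserted into the scene immediately so the user can keep manipulating it while a slow generative edit runs in the background, and is swapped for the generated result when it arrives. The names `Scene`, `SceneObject`, `generate_object`, and `place_with_proxy` are assumptions for illustration only.

```python
# Hypothetical sketch of a proxy-representation workflow for high-latency
# generative edits. Not the paper's implementation; names are illustrative.

import asyncio
from dataclasses import dataclass, field


@dataclass
class SceneObject:
    """A placed object in the editable scene."""
    name: str
    position: tuple            # (x, y, z) world coordinates
    is_proxy: bool = False     # True while the generative result is pending


@dataclass
class Scene:
    objects: list = field(default_factory=list)

    def add(self, obj: SceneObject) -> None:
        self.objects.append(obj)

    def replace(self, proxy: SceneObject, final: SceneObject) -> None:
        idx = self.objects.index(proxy)
        self.objects[idx] = final


async def generate_object(prompt: str, position: tuple) -> SceneObject:
    """Stand-in for a high-latency generative call (e.g., text-to-3D)."""
    await asyncio.sleep(2.0)  # simulate multi-second generation latency
    return SceneObject(name=f"generated:{prompt}", position=position)


async def place_with_proxy(scene: Scene, prompt: str, position: tuple) -> None:
    # 1. Immediately insert a low-fidelity proxy so the user can keep
    #    moving, rotating, or re-prompting it while generation runs.
    proxy = SceneObject(name=f"proxy:{prompt}", position=position, is_proxy=True)
    scene.add(proxy)

    # 2. Run the slow generative edit asynchronously.
    final = await generate_object(prompt, proxy.position)

    # 3. Swap the proxy for the generated result at its current position,
    #    preserving any direct manipulation done in the meantime.
    final.position = proxy.position
    scene.replace(proxy, final)


async def main() -> None:
    scene = Scene()
    # Several edits can be queued; proxies appear instantly for each.
    await asyncio.gather(
        place_with_proxy(scene, "wooden chair", (1.0, 0.0, 2.0)),
        place_with_proxy(scene, "potted plant", (0.5, 0.0, 1.5)),
    )
    for obj in scene.objects:
        print(obj.name, obj.position)


if __name__ == "__main__":
    asyncio.run(main())
```

The key design point this sketch captures is decoupling the interaction loop from the generative backend: the user interface stays responsive because it only ever manipulates lightweight proxies, while expensive radiance-field updates complete asynchronously.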
Similar Papers
Real-time 3D Visualization of Radiance Fields on Light Field Displays
Graphics
Renders radiance fields in real time on light field displays.
Radiance Fields in XR: A Survey on How Radiance Fields are Envisioned and Addressed for XR Research
Graphics
Surveys how radiance fields are envisioned and applied in XR research.