Dynamic Scene Reconstruction: Recent Advance in Real-time Rendering and Streaming
By: Jiaxuan Zhu, Hao Tang
Potential Business Impact:
Turns ordinary 2D images and video into dynamic 3D scenes that can be rendered and streamed in real time.
Representing and rendering dynamic scenes from 2D images is a fundamental yet challenging problem in computer vision and graphics. This survey provides a comprehensive review of the evolution and advancements in dynamic scene representation and rendering, with a particular emphasis on recent progress in Neural Radiance Field (NeRF)-based and 3D Gaussian Splatting (3DGS)-based reconstruction methods. We systematically summarize existing approaches, categorize them according to their core principles, compile relevant datasets, compare the performance of various methods on these benchmarks, and explore the challenges and future research directions in this rapidly evolving field. In total, we review over 170 relevant papers, offering a broad perspective on the state of the art in this domain.
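To make concrete what "NeRF-based" and "3DGS-based" mean here, the following is a minimal illustrative Python/NumPy sketch, not taken from the survey or from any specific method it reviews: a single-ray volume-rendering composite in the NeRF style, plus a toy "dynamic Gaussian" whose center is translated over time. The function names, the linear-motion model, and all numbers are assumptions made purely for illustration.

import numpy as np

def nerf_style_render(density, color, deltas):
    """Volume-render one ray from samples along it (NeRF-style compositing).

    density: (N,) non-negative volume densities sigma_i at each sample
    color:   (N, 3) RGB predicted at each sample
    deltas:  (N,) distances between consecutive samples
    """
    alpha = 1.0 - np.exp(-density * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))    # accumulated transmittance
    weights = trans * alpha                                          # contribution of each sample
    return (weights[:, None] * color).sum(axis=0)                    # composited ray color

def dynamic_gaussian_center(mean, velocity, t):
    """Toy dynamic 3D Gaussian: translate its center linearly over time t.
    Real 3DGS-based dynamic methods also deform covariance, opacity, and color."""
    return mean + velocity * t

if __name__ == "__main__":
    rgb = nerf_style_render(
        density=np.array([0.5, 1.0, 2.0]),
        color=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]),
        deltas=np.array([0.1, 0.1, 0.1]),
    )
    print("composited color:", rgb)
    print("gaussian center at t=0.5:",
          dynamic_gaussian_center(np.zeros(3), np.array([1.0, 0.0, 0.0]), 0.5))

The two sketches correspond to the two families the survey contrasts: NeRF-style methods query a continuous field and integrate along rays, while 3DGS-style methods rasterize explicit Gaussian primitives, which is what makes real-time rendering and streaming of dynamic scenes practical.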
Similar Papers
Advances in Radiance Field for Dynamic Scene: From Neural Field to Gaussian Field
CV and Pattern Recognition
Makes videos look real by understanding movement.
ProDyG: Progressive Dynamic Scene Reconstruction via Gaussian Splatting from Monocular Videos
CV and Pattern Recognition
Builds 3D worlds from videos in real time.
A Survey of 3D Reconstruction with Event Cameras
CV and Pattern Recognition
Helps robots see in fast, dark, or bright places.