NGD: Neural Gradient Based Deformation for Monocular Garment Reconstruction
By: Soham Dasgupta, Shanthika Naik, Preet Savalia, and more
Potential Business Impact:
Makes clothes look real in videos.
Dynamic garment reconstruction from monocular video is an important yet challenging task due to the complex dynamics and unconstrained nature of garments. Recent advancements in neural rendering have enabled high-quality geometric reconstruction with image/video supervision. However, implicit representation methods that use volume rendering often produce overly smooth geometry and fail to model high-frequency details. Template reconstruction methods model explicit geometry, but they deform it via per-vertex displacements, which results in artifacts. Addressing these limitations, we propose NGD, a Neural Gradient-based Deformation method to reconstruct dynamically evolving textured garments from monocular videos. Additionally, we propose a novel adaptive remeshing strategy for modelling dynamically evolving surface details such as the wrinkles and pleats of a skirt, leading to high-quality reconstruction. Finally, we learn dynamic texture maps to capture per-frame lighting and shadow effects. We provide extensive qualitative and quantitative evaluations that demonstrate significant improvements over existing SOTA methods and high-quality garment reconstructions.
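The abstract contrasts per-vertex displacement with gradient-based deformation. The general idea behind the latter (the paper's exact formulation is not given here) is to predict a deformation gradient, a 3x3 transform, per face, then recover vertex positions whose edge vectors best match the transformed rest-pose edges in a least-squares sense. This keeps local shape coherent, whereas independent vertex displacements can tear or crumple the surface. A minimal sketch of that reconstruction step, with an illustrative function name and a single pinned anchor vertex to fix the solution's translation:

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import lsqr

def deform_by_gradients(verts, faces, jacobians, anchor=0):
    """Recover vertex positions whose edge vectors match per-face
    deformation gradients, in least squares.

    verts:     (n, 3) rest-pose vertex positions
    faces:     list of (i, j, k) vertex-index triples
    jacobians: one 3x3 deformation gradient per face
    anchor:    index of a vertex pinned to its rest position
               (removes the translational null space)
    """
    n = len(verts)
    n_rows = 2 * len(faces) + 1          # two edge equations per face + anchor
    A = lil_matrix((n_rows, n))
    b = np.zeros((n_rows, 3))
    r = 0
    for f, (i, j, k) in enumerate(faces):
        J = jacobians[f]
        # each deformed edge should equal J applied to the rest edge
        for src, dst in ((i, j), (i, k)):
            A[r, dst] = 1.0
            A[r, src] = -1.0
            b[r] = J @ (verts[dst] - verts[src])
            r += 1
    # pin the anchor vertex to its rest position
    A[r, anchor] = 1.0
    b[r] = verts[anchor]
    A = csr_matrix(A)
    # solve each coordinate independently in least squares
    return np.column_stack([lsqr(A, b[:, c])[0] for c in range(3)])
```

In a learning setup such as the one the abstract describes, the per-face transforms would come from a network and this linear solve (or a differentiable equivalent) would map them to an explicit mesh for rendering losses; the sketch above only shows the geometric solve.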
Similar Papers
Relightable and Dynamic Gaussian Avatar Reconstruction from Monocular Video
CV and Pattern Recognition
Creates lifelike digital people that move and change light.
SAFT: Shape and Appearance of Fabrics from Template via Differentiable Physical Simulations from Monocular Video
CV and Pattern Recognition
Makes clothes look real in 3D videos.