Dress&Dance: Dress up and Dance as You Like It - Technical Preview
By: Jun-Kun Chen, Aayush Bansal, Minh Phuoc Vo, and more
Potential Business Impact:
Lets you try on clothes in a video.
We present Dress&Dance, a video diffusion framework that generates high-quality, 5-second, 24 FPS virtual try-on videos at 1152x720 resolution of a user wearing desired garments while moving in accordance with a given reference video. Our approach requires only a single user image and supports a range of tops, bottoms, and one-piece garments, as well as simultaneous top-and-bottom try-on in a single pass. Key to our framework is CondNet, a novel conditioning network that leverages attention to unify multi-modal inputs (text, images, and videos), thereby enhancing garment registration and motion fidelity. CondNet is trained on heterogeneous data, combining limited video data with a larger, more readily available image dataset, in a multistage progressive manner. Dress&Dance outperforms existing open-source and commercial solutions and enables a high-quality, flexible try-on experience.
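To make the CondNet idea concrete, the sketch below shows one plausible way an attention-based conditioning block could unify text, garment-image, and reference-video tokens for a video diffusion backbone. This is a minimal illustration under assumed dimensions and module names (MultiModalConditioner, the per-modality projections, and the residual cross-attention fusion are all our assumptions), not the authors' implementation.

```python
# Minimal sketch of attention-based multi-modal conditioning,
# loosely following the CondNet description in the abstract.
# All names, dimensions, and the fusion strategy are illustrative assumptions.

import torch
import torch.nn as nn


class MultiModalConditioner(nn.Module):
    """Projects text, garment-image, and reference-video tokens into a shared
    space and lets the video diffusion latents attend to them jointly."""

    def __init__(self, latent_dim=1024, text_dim=768, image_dim=1024,
                 video_dim=1024, cond_dim=1024, num_heads=16):
        super().__init__()
        # Per-modality projections into a common conditioning dimension.
        self.text_proj = nn.Linear(text_dim, cond_dim)
        self.image_proj = nn.Linear(image_dim, cond_dim)
        self.video_proj = nn.Linear(video_dim, cond_dim)
        # Cross-attention: diffusion latents query the unified condition tokens.
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=latent_dim, kdim=cond_dim, vdim=cond_dim,
            num_heads=num_heads, batch_first=True)
        self.norm = nn.LayerNorm(latent_dim)

    def forward(self, latents, text_tokens, image_tokens, video_tokens):
        # latents:      (B, N_latent, latent_dim) video diffusion tokens
        # text_tokens:  (B, N_text, text_dim)     prompt embeddings
        # image_tokens: (B, N_img, image_dim)     user / garment image features
        # video_tokens: (B, N_vid, video_dim)     reference motion features
        cond = torch.cat([
            self.text_proj(text_tokens),
            self.image_proj(image_tokens),
            self.video_proj(video_tokens),
        ], dim=1)  # unified multi-modal conditioning sequence
        attended, _ = self.cross_attn(query=latents, key=cond, value=cond)
        return self.norm(latents + attended)  # residual update of the latents


if __name__ == "__main__":
    block = MultiModalConditioner()
    latents = torch.randn(1, 256, 1024)   # stand-in for video latent tokens
    text = torch.randn(1, 77, 768)        # stand-in for text embeddings
    image = torch.randn(1, 64, 1024)      # stand-in for garment image tokens
    video = torch.randn(1, 128, 1024)     # stand-in for reference video tokens
    out = block(latents, text, image, video)
    print(out.shape)                      # torch.Size([1, 256, 1024])
```

Concatenating the projected modality tokens before a single cross-attention pass is one simple way to let garment appearance and reference motion condition every latent token at once; the paper's actual architecture and training schedule (e.g. the multistage image-then-video progression) may differ.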
Similar Papers
FastFit: Accelerating Multi-Reference Virtual Try-On via Cacheable Diffusion Models
CV and Pattern Recognition
Lets you try on many clothes and items fast.
Eevee: Towards Close-up High-resolution Video-based Virtual Try-on
CV and Pattern Recognition
Makes online clothes look real in videos.
DANCER: Dance ANimation via Condition Enhancement and Rendering with diffusion model
CV and Pattern Recognition
Makes realistic dancing videos from a picture.