DS-VTON: High-Quality Virtual Try-on via Disentangled Dual-Scale Generation
By: Xianbing Sun, Yan Hong, Jiahui Zhan, and more
Potential Business Impact:
Lets you try on clothes virtually with realistic fit and detail.
Despite recent progress, most existing virtual try-on methods still struggle to simultaneously address two core challenges: accurately aligning the garment image with the target human body, and preserving fine-grained garment textures and patterns. In this paper, we propose DS-VTON, a dual-scale virtual try-on framework that explicitly disentangles these objectives for more effective modeling. DS-VTON consists of two stages: the first stage generates a low-resolution try-on result to capture the semantic correspondence between garment and body, where reduced detail facilitates robust structural alignment. The second stage introduces a residual-guided diffusion process that reconstructs high-resolution outputs by refining the residual between the two scales, focusing on texture fidelity. In addition, our method adopts a fully mask-free generation paradigm, eliminating reliance on human parsing maps or segmentation masks. By leveraging the semantic priors embedded in pretrained diffusion models, this design more effectively preserves the person's appearance and geometric consistency. Extensive experiments demonstrate that DS-VTON achieves state-of-the-art performance in both structural alignment and texture preservation across multiple standard virtual try-on benchmarks.
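The abstract describes a two-stage pipeline: a low-resolution pass for garment-body alignment, followed by a refinement pass that operates on the residual between the two scales. The sketch below is an illustrative toy version of that dual-scale idea, not the authors' implementation: the function names (`stage1_lowres`, `stage2_residual_refine`) are hypothetical, and simple downsampling/blending stands in for the actual diffusion models.

```python
import numpy as np

def stage1_lowres(person, garment, scale=4):
    # Stage 1 (stand-in): produce a coarse low-resolution try-on result
    # that captures garment-body correspondence. A real system would run
    # a diffusion model here; we just downsample and blend.
    low_person = person[::scale, ::scale]
    low_garment = garment[::scale, ::scale]
    return 0.5 * low_person + 0.5 * low_garment

def upsample(img, scale=4):
    # Nearest-neighbour upsampling of the low-resolution result.
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def stage2_residual_refine(lowres, garment, scale=4):
    # Stage 2 (stand-in): refine the residual between the upsampled
    # low-res result and the high-res inputs, restoring fine texture.
    # In DS-VTON this residual is modeled by a guided diffusion process.
    base = upsample(lowres, scale)
    residual = garment - base          # proxy for the predicted residual
    return base + 0.5 * residual       # refined high-resolution output

# Toy single-channel "images" standing in for person and garment photos.
person = np.random.rand(64, 64)
garment = np.random.rand(64, 64)
low = stage1_lowres(person, garment)
high = stage2_residual_refine(low, garment)
print(low.shape, high.shape)  # (16, 16) (64, 64)
```

The key structural point the sketch mirrors is that stage 2 never regenerates the image from scratch: it starts from the upsampled coarse result, so structural alignment from stage 1 is preserved while only texture detail is refined.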
Similar Papers
Diffusion Model-Based Size Variable Virtual Try-On Technology and Evaluation Method
Multimedia
Lets you try on clothes in different sizes online.
3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models
CV and Pattern Recognition
Lets you try on clothes in videos realistically.
Undress to Redress: A Training-Free Framework for Virtual Try-On
CV and Pattern Recognition
Lets you try on clothes virtually, even short-sleeved ones.