Two-Way Garment Transfer: Unified Diffusion Framework for Dressing and Undressing Synthesis
By: Angang Zhang, Fang Deng, Hao Chen, and more
Potential Business Impact:
Lets you put clothes on and take them off virtual people.
While recent advances in virtual try-on (VTON) have achieved realistic garment transfer onto human subjects, the inverse task, virtual try-off (VTOFF), which aims to reconstruct canonical garment templates from images of dressed humans, remains critically underexplored and lacks systematic investigation. Existing works predominantly treat the two as isolated tasks: VTON focuses on garment dressing while VTOFF addresses garment extraction, neglecting their complementary symmetry. To bridge this fundamental gap, we propose the Two-Way Garment Transfer Model (TWGTM), to the best of our knowledge the first unified framework for joint clothing-centric image synthesis that resolves both mask-guided VTON and mask-free VTOFF through bidirectional feature disentanglement. Specifically, our framework employs dual-conditioned guidance from both the latent and pixel spaces of reference images to bridge the two tasks. In addition, to resolve the inherent mask-dependency asymmetry between mask-guided VTON and mask-free VTOFF, we devise a phased training paradigm that progressively closes this modality gap. Extensive qualitative and quantitative experiments on the DressCode and VITON-HD datasets validate the efficacy and competitiveness of the proposed approach.
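The summary gives no implementation details, so the following is only a minimal PyTorch sketch of the two mechanisms the abstract names: dual-conditioned guidance (a reference image injected in both latent and pixel space) and a phased training schedule that weans the denoiser off the inpainting mask. Every module, shape, and parameter name here is hypothetical and not taken from the paper.

```python
# Hypothetical sketch of TWGTM's two abstract-level ideas; not the
# authors' actual architecture, which this summary does not specify.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoWayGarmentDenoiser(nn.Module):
    """Toy diffusion denoiser conditioned on a reference image in both
    latent space (channel concat) and pixel space (encoded, resized,
    and concatenated) -- the 'dual-conditioned guidance' idea."""
    def __init__(self, c: int = 4):
        super().__init__()
        self.pixel_enc = nn.Conv2d(3, c, 3, padding=1)   # pixel-space branch
        self.denoise = nn.Conv2d(3 * c + 1, c, 3, padding=1)

    def forward(self, z_t, z_ref, x_ref, mask):
        # Encode the pixel-space reference and match the latent resolution.
        pix = F.interpolate(self.pixel_enc(x_ref), size=z_t.shape[-2:])
        h = torch.cat([z_t, z_ref, pix, mask], dim=1)
        return self.denoise(h)

def training_step(model, z_t, z_ref, x_ref, mask, noise, phase: int):
    # Phase 1 (mask-guided VTON): the garment-region mask is provided.
    # Phase 2 (toward mask-free VTOFF): the mask is replaced by a
    # neutral all-ones map, removing the mask dependency.
    if phase == 2:
        mask = torch.ones_like(mask)
    pred = model(z_t, z_ref, x_ref, mask)
    return F.mse_loss(pred, noise)

# Usage with dummy tensors (shapes are illustrative only):
model = TwoWayGarmentDenoiser()
z_t = torch.randn(1, 4, 32, 32)       # noisy latent
z_ref = torch.randn(1, 4, 32, 32)     # latent-space reference condition
x_ref = torch.randn(1, 3, 256, 256)   # pixel-space reference image
mask = torch.ones(1, 1, 32, 32)       # garment-region mask (phase 1)
noise = torch.randn(1, 4, 32, 32)
loss = training_step(model, z_t, z_ref, x_ref, mask, noise, phase=1)
loss.backward()
```

The hard phase switch is a crude stand-in for the abstract's "progressively bridges this modality gap"; a real schedule would more plausibly anneal or randomly drop the mask over training rather than remove it in a single step.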
Similar Papers
MGT: Extending Virtual Try-Off to Multi-Garment Scenarios
CV and Pattern Recognition
Lets you extract multiple garments from a photo of a dressed person.
MuGa-VTON: Multi-Garment Virtual Try-On via Diffusion Transformers with Prompt Customization
CV and Pattern Recognition
Lets you try on multiple garments virtually, with prompt customization.
Rethinking Garment Conditioning in Diffusion-based Virtual Try-On
CV and Pattern Recognition
Lets you try on clothes virtually with less computer power.