Disentangled Concept Representation for Text-to-image Person Re-identification
By: Giyeol Kim, Chanho Eom
Text-to-image person re-identification (TIReID) aims to retrieve person images from a large gallery given free-form textual descriptions. TIReID is challenging due to the substantial modality gap between visual appearances and textual expressions, as well as the need to model fine-grained correspondences that distinguish individuals with similar attributes such as clothing color, texture, or outfit style. To address these issues, we propose DiCo (Disentangled Concept Representation), a novel framework that achieves hierarchical and disentangled cross-modal alignment. DiCo introduces a shared slot-based representation, where each slot acts as a part-level anchor across modalities and is further decomposed into multiple concept blocks. This design enables the disentanglement of complementary attributes (e.g., color, texture, shape) while maintaining consistent part-level correspondence between image and text. Extensive experiments on CUHK-PEDES, ICFG-PEDES, and RSTPReid demonstrate that our framework performs competitively with state-of-the-art methods while enhancing interpretability through explicit slot- and block-level representations, enabling more fine-grained retrieval.
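The abstract gives no implementation details, so the following is only a minimal sketch of the slot-and-block idea, not the authors' actual method. It assumes PyTorch, CLIP-style token features as input, a single cross-attention step from learned shared slots to the tokens, and concept blocks obtained by evenly splitting each slot vector; the names SharedSlotEncoder and block_similarity are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSlotEncoder(nn.Module):
    """Hypothetical sketch: learned slots, shared across modalities,
    attend over token features; each slot is split into concept blocks."""

    def __init__(self, dim: int = 512, num_slots: int = 6, num_blocks: int = 4):
        super().__init__()
        assert dim % num_blocks == 0, "slot dim must split evenly into blocks"
        self.slots = nn.Parameter(torch.randn(num_slots, dim))  # shared part-level anchors
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.num_blocks = num_blocks

    def forward(self, tokens: torch.Tensor):
        # tokens: (B, N, dim) -- image patch or text token features
        B = tokens.size(0)
        q = self.to_q(self.slots).expand(B, -1, -1)           # (B, S, dim)
        k, v = self.to_k(tokens), self.to_v(tokens)           # (B, N, dim)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        slots = attn @ v                                      # (B, S, dim)
        # decompose each slot into concept blocks (e.g., color/texture/shape)
        blocks = slots.view(B, slots.size(1), self.num_blocks, -1)
        return slots, blocks

def block_similarity(img_blocks: torch.Tensor, txt_blocks: torch.Tensor):
    # cosine similarity per (slot, block) pair, averaged; inputs: (B, S, K, d)
    img = F.normalize(img_blocks, dim=-1)
    txt = F.normalize(txt_blocks, dim=-1)
    return (img * txt).sum(-1).mean(dim=(1, 2))               # (B,)

# usage with random stand-ins for backbone features
enc = SharedSlotEncoder()
img_tokens = torch.randn(2, 196, 512)  # e.g., ViT patch embeddings
txt_tokens = torch.randn(2, 77, 512)   # e.g., CLIP text token embeddings
_, img_b = enc(img_tokens)
_, txt_b = enc(txt_tokens)
print(block_similarity(img_b, txt_b))  # one score per image-text pair
```

The key design point the abstract does support: because the slot queries are shared parameters rather than modality-specific, the k-th slot attends to the same part-level concept in both image and text, which is what allows block-wise similarities to be compared directly across modalities.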
Similar Papers
Hierarchical Prompt Learning for Image- and Text-Based Person Re-Identification
CV and Pattern Recognition
Find people in photos using pictures or words.
Identity Clue Refinement and Enhancement for Visible-Infrared Person Re-Identification
CV and Pattern Recognition
Helps cameras find people in different light.
Language-Guided Visual Perception Disentanglement for Image Quality Assessment and Conditional Image Generation
CV and Pattern Recognition
Helps computers see images better, not just understand them.