A Theory-Inspired Framework for Few-Shot Cross-Modal Sketch Person Re-Identification
By: Yunpeng Gong, Yongjie Hou, Jiangming Shi, and more
Potential Business Impact:
Matches people in surveillance photos to hand-drawn sketches.
Sketch-based person re-identification aims to match hand-drawn sketches with RGB surveillance images, but remains challenging due to significant modality gaps and limited annotated data. To address this, we introduce KTCAA, a theoretically grounded framework for few-shot cross-modal generalization. Motivated by generalization theory, we identify two key factors influencing target-domain risk: (1) domain discrepancy, which quantifies the alignment difficulty between source and target distributions; and (2) perturbation invariance, which evaluates the model's robustness to modality shifts. Based on these insights, we propose two components: (1) Alignment Augmentation (AA), which applies localized sketch-style transformations to simulate target distributions and facilitate progressive alignment; and (2) Knowledge Transfer Catalyst (KTC), which enhances invariance by introducing worst-case perturbations and enforcing consistency. These modules are jointly optimized under a meta-learning paradigm that transfers alignment knowledge from data-rich RGB domains to sketch-based scenarios. Experiments on multiple benchmarks demonstrate that KTCAA achieves state-of-the-art performance, particularly in data-scarce conditions.
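The "worst-case perturbations and enforcing consistency" idea behind KTC resembles adversarial consistency training: an inner loop searches for a bounded input perturbation that maximally shifts the extracted features, and the training objective then penalizes that feature drift. The toy sketch below illustrates this two-step pattern on a linear feature extractor with an analytic gradient; all names (`W`, `worst_case_delta`, `epsilon`) and the specific optimization details are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # toy linear feature extractor (stand-in for a deep network)

def worst_case_delta(x, epsilon=0.1, steps=5, alpha=0.05):
    """Inner maximization: ascend on the feature drift ||W(x+d) - Wx||^2.

    For a linear extractor the drift is ||W d||^2, whose gradient
    w.r.t. d is 2 W^T W d, so we can take signed-gradient steps
    and project back into the epsilon-ball (PGD-style).
    """
    d = rng.normal(scale=1e-3, size=x.shape)  # small random start
    for _ in range(steps):
        grad = 2 * W.T @ (W @ d)              # gradient of the drift w.r.t. d
        d = np.clip(d + alpha * np.sign(grad), -epsilon, epsilon)
    return d

x = rng.normal(size=8)
d = worst_case_delta(x)
# Outer minimization (not shown as a training loop): the model would be
# updated to shrink this consistency penalty, making features invariant to d.
drift = float(np.sum((W @ (x + d) - W @ x) ** 2))
```

In a real cross-modal setting the perturbation would target the sketch/RGB modality gap rather than generic noise, and the penalty would be minimized jointly with the re-identification loss under the meta-learning schedule the abstract describes.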
Similar Papers
CKDA: Cross-modality Knowledge Disentanglement and Alignment for Visible-Infrared Lifelong Person Re-identification
CV and Pattern Recognition
Keeps cameras recognizing people day and night.
Asymmetric Cross-Modal Knowledge Distillation: Bridging Modalities with Weak Semantic Consistency
CV and Pattern Recognition
Teaches computers to learn from different kinds of pictures.
Supervised Contrastive Learning for Few-Shot AI-Generated Image Detection and Attribution
CV and Pattern Recognition
Finds fake pictures made by AI.