A Theory-Inspired Framework for Few-Shot Cross-Modal Sketch Person Re-Identification

Published: November 24, 2025 | arXiv ID: 2511.18677v1

By: Yunpeng Gong, Yongjie Hou, Jiangming Shi, and more

Potential Business Impact:

Match hand-drawn sketches of people to their photos in RGB surveillance imagery.

Business Areas:
Image Recognition, Data and Analytics, Software

Sketch-based person re-identification aims to match hand-drawn sketches with RGB surveillance images, but remains challenging due to significant modality gaps and limited annotated data. To address this, we introduce KTCAA, a theoretically grounded framework for few-shot cross-modal generalization. Motivated by generalization theory, we identify two key factors influencing target-domain risk: (1) domain discrepancy, which quantifies the alignment difficulty between source and target distributions; and (2) perturbation invariance, which evaluates the model's robustness to modality shifts. Based on these insights, we propose two components: (1) Alignment Augmentation (AA), which applies localized sketch-style transformations to simulate target distributions and facilitate progressive alignment; and (2) Knowledge Transfer Catalyst (KTC), which enhances invariance by introducing worst-case perturbations and enforcing consistency. These modules are jointly optimized under a meta-learning paradigm that transfers alignment knowledge from data-rich RGB domains to sketch-based scenarios. Experiments on multiple benchmarks demonstrate that KTCAA achieves state-of-the-art performance, particularly in data-scarce conditions.
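To make the two components more concrete, below is a minimal, hypothetical PyTorch sketch of the ideas described in the abstract: a localized sketch-style transformation applied to a random image patch (in the spirit of Alignment Augmentation) and a consistency loss against a worst-case perturbation found by a small gradient-ascent step (in the spirit of the Knowledge Transfer Catalyst). This is not the authors' implementation; the function names, hyperparameters, and the generic embedding `model` are all assumptions for illustration.

```python
# Illustrative sketch only (not the paper's code). Assumes a generic CNN
# embedding model that maps images (B, 3, H, W) -> embeddings (B, D).
import torch
import torch.nn.functional as F


def localized_sketch_augment(images, patch_frac=0.5):
    """Turn a random local patch of each RGB image into a grayscale,
    edge-emphasized "sketch-like" region (stand-in for Alignment Augmentation)."""
    b, c, h, w = images.shape
    out = images.clone()
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    for i in range(b):
        top = torch.randint(0, h - ph + 1, (1,)).item()
        left = torch.randint(0, w - pw + 1, (1,)).item()
        patch = out[i, :, top:top + ph, left:left + pw]
        gray = patch.mean(dim=0, keepdim=True)                       # grayscale patch
        edges = gray - F.avg_pool2d(gray, 3, stride=1, padding=1)    # crude edge map
        sketch = (1.0 - edges.abs() * 4.0).clamp(0.0, 1.0)           # dark strokes on white
        out[i, :, top:top + ph, left:left + pw] = sketch.expand_as(patch)
    return out


def worst_case_consistency(model, images, epsilon=0.03, step=0.01, iters=1):
    """Consistency loss between embeddings of clean images and images under a
    small worst-case perturbation (stand-in for the Knowledge Transfer Catalyst)."""
    with torch.no_grad():
        anchor = F.normalize(model(images), dim=1)                   # clean embeddings
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(iters):
        emb = F.normalize(model(images + delta), dim=1)
        loss = (1.0 - (emb * anchor).sum(dim=1)).mean()              # push embeddings apart
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()                        # ascent on the perturbation
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
    emb = F.normalize(model(images + delta.detach()), dim=1)
    return (1.0 - (emb * anchor).sum(dim=1)).mean()                  # enforce consistency
```

In the paper these objectives are optimized jointly inside a meta-learning loop that transfers alignment knowledge from RGB-rich domains to sketch queries; the sketch above omits that outer loop and shows only the per-batch losses.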

Country of Origin
🇨🇳 China

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition