Personalized Image Descriptions from Attention Sequences
By: Ruoyu Xue, Hieu Le, Jingyi Xu, and more
Potential Business Impact:
Helps computers describe pictures like you do.
People can view the same image differently: they focus on different regions, objects, and details in varying orders and describe them in distinct linguistic styles. This leads to substantial variability in image descriptions. However, existing models for personalized image description focus on linguistic style alone, with no prior work leveraging individual viewing patterns. We address this gap by explicitly modeling personalized viewing behavior as a core factor in description generation. Our method, DEPER (DEscription-PERception persona encoder), learns a subject embedding that captures both linguistic style and viewing behavior, guided by an auxiliary attention-prediction task. A lightweight adapter aligns these embeddings with a frozen vision-language model, enabling few-shot personalization without retraining. Across four datasets spanning diverse viewing tasks and both short and detailed descriptions, DEPER achieves a 24% average improvement, showing that modeling personalized attention produces more human-aligned and high-quality descriptions. We posit that understanding how people see helps predict what they say; modeling human diversity in perception can improve both performance and human alignment in multimodal systems.
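The abstract outlines the architecture at a high level: a per-subject embedding trained with an auxiliary attention-prediction objective, plus a lightweight adapter that maps that embedding into a frozen vision-language model. The paper's code is not reproduced here; the following is a minimal, hypothetical PyTorch sketch of that idea. All names (`PersonaEncoder`, `subject_embed`, `attn_head`, `adapter`) and dimensions are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PersonaEncoder(nn.Module):
    """Hypothetical sketch of a DEPER-style persona encoder.

    Learns one embedding per subject, intended to capture both linguistic
    style and viewing behavior. An auxiliary head predicts the subject's
    attention over the image, grounding the embedding in how that person
    looks at pictures. A small adapter projects the embedding into the
    frozen VLM's feature space, so the VLM itself is never retrained.
    """

    def __init__(self, num_subjects: int, embed_dim: int, vlm_dim: int):
        super().__init__()
        # One learned embedding per subject (style + viewing behavior).
        self.subject_embed = nn.Embedding(num_subjects, embed_dim)
        # Auxiliary attention-prediction head: from the subject embedding
        # and image features, predict attention logits (e.g., over regions).
        self.attn_head = nn.Sequential(
            nn.Linear(embed_dim + vlm_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, vlm_dim),  # assumed attention-logit size
        )
        # Lightweight adapter aligning the persona embedding with the
        # frozen VLM's input space.
        self.adapter = nn.Linear(embed_dim, vlm_dim)

    def forward(self, subject_ids: torch.Tensor, image_feats: torch.Tensor):
        z = self.subject_embed(subject_ids)                      # (B, embed_dim)
        attn_logits = self.attn_head(
            torch.cat([z, image_feats], dim=-1)                  # (B, embed+vlm)
        )
        persona_tokens = self.adapter(z)                         # (B, vlm_dim)
        return persona_tokens, attn_logits
```

Under this reading, training would combine the frozen VLM's description loss (conditioned on the persona tokens) with the auxiliary attention-prediction loss, and a new subject could be personalized few-shot by fitting only their embedding while everything else stays frozen.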
Similar Papers
Per-Query Visual Concept Learning
CV and Pattern Recognition
Teaches computers to draw your specific ideas.
VISTA: Vision-Language Imitation of Situational Thinking and Attention for Human-Like Driver Focus in Dynamic Environments
CV and Pattern Recognition
Predicts where drivers look using words.
Semantic Anchoring for Robust Personalization in Text-to-Image Diffusion Models
CV and Pattern Recognition
Teaches AI to draw your specific things from photos.