Personalized Image Descriptions from Attention Sequences

Published: December 7, 2025 | arXiv ID: 2512.06662v1

By: Ruoyu Xue, Hieu Le, Jingyi Xu and more

Potential Business Impact:

Helps computers describe pictures like you do.

Business Areas:
Image Recognition, Data and Analytics, Software

People can view the same image differently: they focus on different regions, objects, and details in varying orders and describe them in distinct linguistic styles. This leads to substantial variability in image descriptions. However, existing models for personalized image description focus on linguistic style alone, with no prior work leveraging individual viewing patterns. We address this gap by explicitly modeling personalized viewing behavior as a core factor in description generation. Our method, DEPER (DEscription-PERception persona encoder), learns a subject embedding that captures both linguistic style and viewing behavior, guided by an auxiliary attention-prediction task. A lightweight adapter aligns these embeddings with a frozen vision-language model, enabling few-shot personalization without retraining. Across four datasets spanning diverse viewing tasks and both short and detailed descriptions, DEPER achieves a 24% average improvement, showing that modeling personalized attention produces more human-aligned and high-quality descriptions. We posit that understanding how people see helps predict what they say; modeling human diversity in perception can improve both performance and human alignment in multimodal systems.
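The abstract describes two components: a persona encoder that turns a viewer's attention sequences and past descriptions into a subject embedding (with an auxiliary attention-prediction objective), and a lightweight adapter that aligns that embedding with a frozen vision-language model. The sketch below is not the authors' code; the module names, input formats (fixation coordinates plus duration), dimensions, and the soft-prompt fusion strategy are all illustrative assumptions about how such a pipeline could be wired up.

```python
# Minimal sketch (assumed design, not the DEPER implementation) of a persona
# encoder with an auxiliary attention-prediction head, plus an adapter that
# maps the persona embedding into the token-embedding space of a frozen VLM.
import torch
import torch.nn as nn


class PersonaEncoder(nn.Module):
    """Encodes a subject's attention sequence and description features into one embedding."""

    def __init__(self, attn_dim=4, text_dim=256, hidden=256, persona_dim=128):
        super().__init__()
        # GRU over fixation sequences (e.g., x, y, duration, order) -- assumed input format.
        self.attn_rnn = nn.GRU(attn_dim, hidden, batch_first=True)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.fuse = nn.Linear(2 * hidden, persona_dim)
        # Auxiliary head: predict the next fixation location from the persona embedding.
        self.attn_pred = nn.Linear(persona_dim, 2)

    def forward(self, fixations, text_feats):
        _, h = self.attn_rnn(fixations)            # h: (1, B, hidden)
        a = h.squeeze(0)                            # viewing-behavior summary
        t = self.text_proj(text_feats)              # linguistic-style summary
        persona = self.fuse(torch.cat([a, t], dim=-1))
        next_fix = self.attn_pred(persona)          # auxiliary attention prediction
        return persona, next_fix


class PersonaAdapter(nn.Module):
    """Lightweight adapter mapping a persona embedding to soft prompt tokens for a frozen VLM."""

    def __init__(self, persona_dim=128, vlm_dim=768, n_tokens=4):
        super().__init__()
        self.n_tokens = n_tokens
        self.proj = nn.Linear(persona_dim, n_tokens * vlm_dim)

    def forward(self, persona):
        b = persona.shape[0]
        return self.proj(persona).view(b, self.n_tokens, -1)  # soft prompt tokens


if __name__ == "__main__":
    enc, adapter = PersonaEncoder(), PersonaAdapter()
    fixations = torch.randn(2, 12, 4)   # 2 viewers, 12 fixations each (x, y, duration, order)
    text_feats = torch.randn(2, 256)    # pooled features of each viewer's past descriptions
    persona, next_fix = enc(fixations, text_feats)
    prompts = adapter(persona)          # would be prepended to the frozen VLM's input embeddings
    print(persona.shape, next_fix.shape, prompts.shape)
```

Because only the encoder and adapter carry trainable parameters while the vision-language model stays frozen, a setup along these lines supports few-shot personalization from a handful of a viewer's description-attention pairs, consistent with the abstract's claim of personalization without retraining.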

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition