Improving Personalized Search with Regularized Low-Rank Parameter Updates
By: Fiona Ryan, Josef Sivic, Fabian Caba Heilbron, and more
Potential Business Impact:
Teaches computers to find your specific things, like your own dog, in photos.
Personalized vision-language retrieval seeks to recognize new concepts (e.g., "my dog Fido") from only a few examples. This task is challenging because it requires not only learning a new concept from a few images, but also integrating personal and general knowledge to recognize the concept in different contexts. In this paper, we show how to effectively adapt the internal representation of a vision-language dual encoder model for personalized vision-language retrieval. We find that regularized low-rank adaptation of a small set of parameters in the language encoder's final layer serves as a highly effective alternative to textual inversion for recognizing the personal concept while preserving general knowledge. Additionally, we explore strategies for combining parameters of multiple learned personal concepts, finding that parameter addition is effective. To evaluate how well general knowledge is preserved in a finetuned representation, we introduce a metric that measures image retrieval accuracy based on captions generated by a vision-language model (VLM). Our approach achieves state-of-the-art accuracy on two benchmarks for personalized image retrieval with natural language queries, DeepFashion2 and ConCon-Chi, outperforming the prior art by 4%-22% on personal retrievals.
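The abstract describes two mechanisms: a regularized low-rank update to the language encoder's final layer, and merging multiple personal concepts by adding their parameter updates. The sketch below illustrates the general idea in PyTorch; it is not the authors' implementation. The class and function names (LowRankUpdate, merge_concepts), the rank, the regularization weight, and the choice of a simple L2 penalty on the update are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LowRankUpdate(nn.Module):
    """LoRA-style update Delta W = B @ A added to a frozen linear layer
    (e.g., a projection in the language encoder's final layer)."""

    def __init__(self, base_linear: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad = False  # keep pretrained weights frozen
        out_dim, in_dim = base_linear.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, rank))

    def delta(self) -> torch.Tensor:
        # The low-rank parameter update learned for one personal concept.
        return self.B @ self.A

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the low-rank correction.
        return self.base(x) + x @ self.delta().T

    def reg_loss(self) -> torch.Tensor:
        # Illustrative regularizer: penalize the size of the update so the
        # adapted encoder stays close to the original (general knowledge).
        return self.delta().pow(2).sum()


def merge_concepts(base_linear: nn.Linear, updates) -> nn.Linear:
    """Combine several learned personal concepts by adding their deltas."""
    merged = nn.Linear(
        base_linear.in_features,
        base_linear.out_features,
        bias=base_linear.bias is not None,
    )
    with torch.no_grad():
        merged.weight.copy_(base_linear.weight + sum(u.delta() for u in updates))
        if base_linear.bias is not None:
            merged.bias.copy_(base_linear.bias)
    return merged
```

In a training step, one might combine a retrieval loss on the few personal examples with the regularizer, e.g. `loss = contrastive_loss(image_emb, text_emb) + lambda_reg * lora.reg_loss()`, where `contrastive_loss` and `lambda_reg` are placeholders for whatever objective and weighting the finetuning pipeline uses.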
Similar Papers
Scaling Down to Scale Up: Towards Operationally-Efficient and Deployable Clinical Models via Cross-Modal Low-Rank Adaptation for Medical Vision-Language Models
CV and Pattern Recognition
Helps doctors find diseases in CT scans faster.
Infusing fine-grained visual knowledge to Vision-Language Models
CV and Pattern Recognition
Keeps AI smart while teaching it new skills.
Improving Visual Recommendation on E-commerce Platforms Using Vision-Language Models
Information Retrieval
Finds better products you'll like to buy.