Can a Unimodal Language Agent Provide Preferences to Tune a Multimodal Vision-Language Model?
By: Sazia Tabasum Mim, Jack Morris, Manish Dhakal, and more
Potential Business Impact:
AI learns to describe pictures better with text help.
To explore a more scalable path for adding multimodal capabilities to existing LLMs, this paper addresses a fundamental question: Can a unimodal LLM, relying solely on text, reason about its own informational needs and provide effective feedback to optimize a multimodal model? To answer this, we propose a method that enables a language agent to give feedback to a vision-language model (VLM) so that the VLM adapts its text generation to the agent's preferences. Our results across diverse experiments affirm this hypothesis, showing that LLM preference feedback significantly enhances VLM descriptions. Using our proposed method, we find that the VLM can generate multimodal scene descriptions that help the LLM better understand multimodal context, yielding improvements of up to 13% in absolute accuracy over the baseline multimodal approach. Furthermore, a human study validated our AI-driven feedback, showing a 64.6% preference alignment rate between the LLM's choices and human judgments. Extensive experiments provide insights into how and why the method works, as well as its limitations.
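The abstract does not spell out the training recipe, but the loop it describes, a text-only LLM judging candidate descriptions produced by a VLM, can be sketched as a preference-collection step. The sketch below is a hypothetical illustration, not the authors' implementation: the callables `vlm_describe` and `llm_judge`, and the idea of later feeding the collected pairs into a preference-optimization method such as DPO, are assumptions.

```python
# Minimal sketch (assumed, not the paper's exact method) of collecting
# LLM preference feedback over VLM-generated scene descriptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PreferencePair:
    """One training example: the description the LLM preferred vs. rejected."""
    image_id: str
    prompt: str
    chosen: str
    rejected: str


def collect_preferences(
    image_ids: List[str],
    prompt: str,
    vlm_describe: Callable[[str, str], str],    # (image_id, prompt) -> description
    llm_judge: Callable[[str, str, str], int],  # (task_context, desc_a, desc_b) -> 0 or 1
    task_context: str,
) -> List[PreferencePair]:
    """Sample two candidate descriptions per image from the VLM and let a
    text-only LLM pick the one that better serves its downstream task."""
    pairs: List[PreferencePair] = []
    for image_id in image_ids:
        # Two candidates, e.g., drawn with temperature sampling inside vlm_describe.
        desc_a = vlm_describe(image_id, prompt)
        desc_b = vlm_describe(image_id, prompt)
        # The judge sees only text: the task context and the two descriptions.
        winner = llm_judge(task_context, desc_a, desc_b)
        chosen, rejected = (desc_a, desc_b) if winner == 0 else (desc_b, desc_a)
        pairs.append(PreferencePair(image_id, prompt, chosen, rejected))
    # These pairs could then drive a preference-optimization update of the VLM
    # (for example, DPO), which is an assumption about the training step.
    return pairs
```

Because the judge only ever sees text, the feedback signal reflects what the unimodal LLM actually finds useful for its task, which is the question the paper sets out to test.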
Similar Papers
Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension In Biomedical Image Analysis
CV and Pattern Recognition
Helps doctors understand cancer treatment images better.
Enhancing Agentic Autonomous Scientific Discovery with Vision-Language Model Capabilities
Computation and Language
Computers discover science by checking their own work.
Multilingual VLM Training: Adapting an English-Trained VLM to French
Computation and Language
Makes AI understand pictures in many languages.