LVLM-Aided Alignment of Task-Specific Vision Models
By: Alexander Koebler, Lukas Kuhn, Ingo Thon, and more
Potential Business Impact:
Makes AI models understand things like people do.
In high-stakes domains, small task-specific vision models are crucial due to their low computational requirements and the availability of numerous methods to explain their results. However, these explanations often reveal that the models do not align well with human domain knowledge, relying instead on spurious correlations. This can result in brittle behavior once deployed in the real world. To address this issue, we introduce a novel and efficient method for aligning small task-specific vision models with human domain knowledge by leveraging the generalization capabilities of a Large Vision Language Model (LVLM). Our LVLM-Aided Visual Alignment (LVLM-VA) method provides a bidirectional interface that translates model behavior into natural language and maps human class-level specifications to image-level critiques, enabling effective interaction between domain experts and the model. Our method demonstrates substantial improvement in aligning model behavior with human specifications, as validated on both synthetic and real-world datasets. We show that it effectively reduces the model's dependence on spurious features and group-specific biases, without requiring fine-grained feedback.
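Since the abstract describes a concrete control flow (explain the small model's prediction, verbalize it via the LVLM, compare against a human class-level specification, retrain on the resulting critiques), a rough Python sketch of one such alignment round may help. All helper names and signatures below (`explain`, `lvlm_describe`, `lvlm_critique`, `finetune_with_critiques`) are hypothetical stand-ins, not the authors' published interface.

```python
from typing import Optional

# A minimal sketch of one LVLM-VA-style alignment round, assuming hypothetical
# helpers. These names and signatures are illustrative, not the paper's API.

def explain(model, image):
    """Placeholder: produce an explanation for the model's prediction,
    e.g., a saliency heatmap from an attribution method."""
    ...

def lvlm_describe(image, explanation) -> str:
    """Placeholder: prompt the LVLM to translate the model's behavior
    (image plus explanation) into a natural-language description."""
    ...

def lvlm_critique(description: str, class_spec: str) -> Optional[str]:
    """Placeholder: prompt the LVLM to compare the described behavior with a
    human class-level specification and return an image-level critique,
    or None if the behavior already matches the specification."""
    ...

def finetune_with_critiques(model, critiques):
    """Placeholder: use the collected critiques as a training signal to steer
    the small vision model away from spurious features."""
    ...

def alignment_round(model, images, labels, class_specs):
    """One pass of the bidirectional interface described in the abstract:
    model behavior -> natural language, class-level spec -> image-level critique."""
    critiques = []
    for image, label in zip(images, labels):
        explanation = explain(model, image)
        description = lvlm_describe(image, explanation)
        critique = lvlm_critique(description, class_specs[label])
        if critique is not None:  # feedback only where behavior violates the spec
            critiques.append((image, label, critique))
    return finetune_with_critiques(model, critiques)
```

The point of the sketch is the division of labor: the human supplies only coarse class-level specifications, while the LVLM does the fine-grained work of turning those specifications into per-image critiques.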
Similar Papers
Feedback-Driven Vision-Language Alignment with Minimal Human Supervision
CV and Pattern Recognition
Makes AI understand pictures better with less work.
Toward Automatic Safe Driving Instruction: A Large-Scale Vision Language Model Approach
CV and Pattern Recognition
Helps cars watch drivers and roads for safety.
Improving Alignment in LVLMs with Debiased Self-Judgment
CV and Pattern Recognition
Model judges its own answers to be safer.