Mitigating Object and Action Hallucinations in Multimodal LLMs via Self-Augmented Contrastive Alignment
By: Kai-Po Chang, Wei-Yuan Cheng, Chi-Pin Huang, and more
Potential Business Impact:
Makes AI-generated video descriptions more truthful.
Recent advances in multimodal LLMs (MLLMs) have demonstrated a remarkable capability to generate descriptive captions for input videos. However, these models suffer from factual inaccuracies in their generated descriptions, causing severe hallucination issues. While prior work has explored alleviating hallucinations for static images, jointly mitigating visual object and temporal action hallucinations for dynamic videos remains a challenging and unsolved task. To tackle this challenge, we propose a Self-Augmented Contrastive Alignment (SANTA) framework that promotes object and action faithfulness by suppressing spurious correlations and enforcing emphasis on visual facts. SANTA employs a hallucinative self-augmentation scheme that identifies potential hallucinations latent in the MLLM and transforms the original captions into contrastive negatives. Furthermore, we develop a tracklet-phrase contrastive alignment that matches regional objects and relation-guided actions with their corresponding visual and temporal phrases. Extensive experiments demonstrate that SANTA outperforms existing methods in alleviating object and action hallucinations, yielding superior performance on hallucination examination benchmarks.
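To make the idea concrete, below is a minimal sketch (not the authors' released code) of how a tracklet-phrase contrastive alignment objective could be set up, with self-augmented hallucinated phrases serving as negatives. All function names, tensor shapes, and the temperature value are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of a tracklet-phrase contrastive alignment loss in the
# spirit of SANTA: each visual tracklet embedding (regional object or
# relation-guided action) is pulled toward its matching caption phrase and
# pushed away from K self-augmented hallucinated phrases.
import torch
import torch.nn.functional as F


def tracklet_phrase_contrastive_loss(
    tracklet_feats: torch.Tensor,    # (B, D) visual tracklet embeddings
    pos_phrase_feats: torch.Tensor,  # (B, D) matching (faithful) phrase embeddings
    neg_phrase_feats: torch.Tensor,  # (B, K, D) hallucination-augmented negatives
    temperature: float = 0.07,       # assumed value, common in contrastive learning
) -> torch.Tensor:
    """InfoNCE-style loss: index 0 (the faithful phrase) is the target class."""
    v = F.normalize(tracklet_feats, dim=-1)        # (B, D)
    p = F.normalize(pos_phrase_feats, dim=-1)      # (B, D)
    n = F.normalize(neg_phrase_feats, dim=-1)      # (B, K, D)

    pos_logits = (v * p).sum(dim=-1, keepdim=True)  # (B, 1) similarity to positive
    neg_logits = torch.einsum("bd,bkd->bk", v, n)   # (B, K) similarity to negatives

    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage with random features; in practice the embeddings would come
    # from the MLLM's visual tracklets and caption phrase encoder.
    B, K, D = 4, 3, 256
    loss = tracklet_phrase_contrastive_loss(
        torch.randn(B, D), torch.randn(B, D), torch.randn(B, K, D)
    )
    print(float(loss))
```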
Similar Papers
Mitigating Multimodal Hallucinations via Gradient-based Self-Reflection
CV and Pattern Recognition
Stops AI from describing things that are not in the image.
Mitigating Hallucinations in Multimodal LLMs via Object-aware Preference Optimization
CV and Pattern Recognition
Makes AI stop making up fake answers.
SEASON: Mitigating Temporal Hallucination in Video Large Language Models via Self-Diagnostic Contrastive Decoding
CV and Pattern Recognition
Helps AI avoid making up events over time in videos.