Towards Faithful Reasoning in Comics for Small MLLMs
By: Chengcheng Feng, Haojie Yin, Yucheng Jin, and more
Potential Business Impact:
Helps computers understand funny comics and jokes.
Comic-based visual question answering (CVQA) poses distinct challenges to multimodal large language models (MLLMs) due to its reliance on symbolic abstraction, narrative logic, and humor, which differ from conventional VQA tasks. Although Chain-of-Thought (CoT) prompting is widely used to enhance MLLM reasoning, surprisingly, its direct application to CVQA often degrades performance, especially in small-scale models. Our theoretical and empirical analyses reveal that standard CoT in CVQA suffers from state entanglement, spurious transitions, and exploration inefficiency, with small models particularly vulnerable in resource-constrained settings. To address these issues, we propose a novel comic reasoning framework designed to produce more faithful and transferable reasoning chains in small MLLMs. Specifically, our framework combines modular CoT generation with GRPO-based reinforcement fine-tuning and a novel structured reward. Beyond comic VQA, we further evaluate our approach on a broader class of humor-centric and abstract visual reasoning tasks, including meme understanding and editorial cartoon interpretation. Across five challenging benchmarks, our 3B model outperforms state-of-the-art methods, and plug-in experiments yield an additional average improvement of 12.1% across different MLLMs.
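The abstract pairs GRPO-based reinforcement fine-tuning with a structured reward. A minimal sketch of the group-relative advantage computation at the heart of GRPO is below; the reward components and weights (`format_ok`, `answer_correct`, `w_format`, `w_answer`) are illustrative assumptions, not the paper's actual reward design.

```python
from statistics import mean, pstdev

def structured_reward(format_ok: bool, answer_correct: bool,
                      w_format: float = 0.2, w_answer: float = 0.8) -> float:
    # Hypothetical structured reward: a weighted sum of a format check
    # and an answer-correctness check. Weights are illustrative only.
    return w_format * float(format_ok) + w_answer * float(answer_correct)

def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    # GRPO scores each sampled response relative to the group of
    # responses drawn for the same prompt: subtract the group mean
    # and divide by the group standard deviation.
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled reasoning chains for one comic question:
rewards = [structured_reward(True, True),    # well-formed and correct
           structured_reward(True, False),   # well-formed but wrong
           structured_reward(False, True),   # correct but ill-formed
           structured_reward(False, False)]  # neither
advs = grpo_advantages(rewards)
print([round(a, 3) for a in advs])
```

Group-relative normalization means no separate value model is needed: chains that beat their group's average get a positive advantage, the rest a negative one.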
Similar Papers
Understanding Multi-Agent Reasoning with Large Language Models for Cartoon VQA
CV and Pattern Recognition
Helps computers understand cartoon questions better.
Diagnosing Visual Reasoning: Challenges, Insights, and a Path Forward
CV and Pattern Recognition
Fixes AI seeing things that aren't there.
Analyzing Reasoning Consistency in Large Multimodal Models under Cross-Modal Conflicts
CV and Pattern Recognition
Fixes AI's tendency to ignore facts when thinking.