Understanding Multi-Agent Reasoning with Large Language Models for Cartoon VQA

Published: January 6, 2026 | arXiv ID: 2601.03073v1

By: Tong Wu, Thanet Markchom

Potential Business Impact:

Helps automated systems answer questions about cartoons and other stylised imagery more accurately.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Visual Question Answering (VQA) for stylised cartoon imagery presents challenges, such as interpreting exaggerated visual abstraction and narrative-driven context, that are not adequately addressed by standard large language models (LLMs) trained on natural images. To investigate this issue, a multi-agent LLM framework is introduced, specifically designed for VQA tasks on cartoon imagery. The proposed architecture consists of three specialised agents: a visual agent, a language agent, and a critic agent, which work collaboratively to support structured reasoning by integrating visual cues and narrative context. The framework was systematically evaluated on two cartoon-based VQA datasets: Pororo and Simpsons. Experimental results provide a detailed analysis of how each agent contributes to the final prediction, offering a deeper understanding of LLM-based multi-agent behaviour in cartoon VQA and multimodal inference.
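The visual/language/critic collaboration pattern described in the abstract can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the authors' implementation: all class names, the mock reasoning strings, and the toy acceptance check are assumptions, and a real system would back each agent with an LLM call.

```python
from dataclasses import dataclass

@dataclass
class VQAContext:
    """Inputs for one cartoon VQA example (fields are illustrative)."""
    image_description: str  # stand-in for extracted visual features
    narrative: str          # story context accompanying the frame
    question: str

class VisualAgent:
    """Extracts visual cues relevant to the question."""
    def observe(self, ctx: VQAContext) -> str:
        return f"Visual cues: {ctx.image_description}"

class LanguageAgent:
    """Drafts an answer by combining visual cues with narrative context."""
    def answer(self, ctx: VQAContext, cues: str) -> str:
        return f"Answer to '{ctx.question}' using [{cues}] and narrative [{ctx.narrative}]"

class CriticAgent:
    """Reviews the draft; accepts it or flags it for revision."""
    def review(self, draft: str) -> tuple[bool, str]:
        # Toy acceptance check: a real critic would evaluate the draft with an LLM.
        accepted = "Visual cues" in draft
        return accepted, draft if accepted else "REVISE: " + draft

def run_pipeline(ctx: VQAContext) -> str:
    """One pass of the visual -> language -> critic loop."""
    cues = VisualAgent().observe(ctx)
    draft = LanguageAgent().answer(ctx, cues)
    _, result = CriticAgent().review(draft)
    return result

if __name__ == "__main__":
    ctx = VQAContext(
        image_description="Pororo holds a fishing rod by a frozen lake",
        narrative="Pororo and Crong go ice fishing",
        question="What is Pororo holding?",
    )
    print(run_pipeline(ctx))
```

The key design point the abstract highlights is the division of labour: the critic agent gates the language agent's draft, which is where an iterative revise loop would attach in a fuller system.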

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
CV and Pattern Recognition