Visual Puns from Idioms: An Iterative LLM-T2IM-MLLM Framework
By: Kelaiti Xiao, Liang Yang, Dongyu Zhang, et al.
Potential Business Impact:
Automatically generates images that depict idioms both literally and figuratively, with built-in verification that the intended idiom is recognizable.
We study idiom-based visual puns (images that align an idiom's literal and figurative meanings) and present an iterative framework that coordinates a large language model (LLM), a text-to-image model (T2IM), and a multimodal LLM (MLLM) for automatic generation and evaluation. Given an idiom, the system iteratively (i) generates detailed visual prompts, (ii) synthesizes an image, (iii) infers the idiom from the image, and (iv) refines the prompt until recognition succeeds or a step limit is reached. Using 1,000 idioms as inputs, we synthesize a corresponding dataset of visual pun images with paired prompts, enabling benchmarking of both generation and understanding. Experiments across 10 LLMs, 10 MLLMs, and one T2IM (Qwen-Image) show that MLLM choice is the primary performance driver: GPT achieves the highest accuracies, Gemini follows, and the best open-source MLLM (Gemma) is competitive with some closed models. On the LLM side, Claude attains the strongest average performance for prompt generation.
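To make the loop concrete, here is a minimal Python sketch of steps (i)-(iv). The helper callables (propose_prompt, synthesize_image, infer_idiom, refine_prompt), the exact-match success check, and the default step limit of 5 are illustrative assumptions for this sketch, not the authors' implementation.

```python
from typing import Any, Callable, Tuple

def generate_visual_pun(
    idiom: str,
    propose_prompt: Callable[[str], str],            # LLM: idiom -> detailed visual prompt
    synthesize_image: Callable[[str], Any],          # T2IM: prompt -> image
    infer_idiom: Callable[[Any], str],               # MLLM: image -> inferred idiom
    refine_prompt: Callable[[str, str, str], str],   # LLM: (idiom, prompt, wrong guess) -> new prompt
    max_steps: int = 5,                              # assumed step limit; the paper only says "step limit"
) -> Tuple[Any, str, bool]:
    """Iterate (i)-(iv) until the MLLM recognizes the idiom or the step limit is hit."""
    prompt = propose_prompt(idiom)                   # (i) generate a detailed visual prompt
    image = None
    for _ in range(max_steps):
        image = synthesize_image(prompt)             # (ii) synthesize an image from the prompt
        guess = infer_idiom(image)                   # (iii) infer the idiom from the image
        # Success criterion assumed here to be a normalized exact match.
        if guess.strip().lower() == idiom.strip().lower():
            return image, prompt, True               # recognition succeeded
        prompt = refine_prompt(idiom, prompt, guess) # (iv) refine the prompt using the failed guess
    return image, prompt, False                      # step limit reached without recognition
```

Passing the three models in as callables keeps the sketch agnostic to which LLM, T2IM, and MLLM are plugged in, which mirrors the paper's design of varying each component independently.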
Similar Papers
Pun Unintended: LLMs and the Illusion of Humor Understanding
Computation and Language
Evaluates whether LLMs genuinely understand puns and humor.
Evaluating LLMs on Chinese Idiom Translation
Computation and Language
Benchmarks how well LLMs translate Chinese idioms.