Score: 3

One Joke to Rule them All? On the (Im)possibility of Generalizing Humor

Published: August 26, 2025 | arXiv ID: 2508.19402v1

By: Mor Turgeman, Chen Shani, Dafna Shahaf

BigTech Affiliations: Stanford University

Potential Business Impact:

Language models could recognize new, unseen types of jokes without being retrained for each one.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Humor is a broad and complex form of communication that remains challenging for machines. Despite this breadth, most existing research on computational humor has traditionally focused on modeling a specific type of humor. In this work, we wish to understand whether competence on one or more specific humor tasks confers any ability to transfer to novel, unseen types; in other words, is this fragmentation inevitable? This question is especially timely as new humor types continuously emerge in online and social media contexts (e.g., memes, anti-humor, AI fails). If Large Language Models (LLMs) are to keep up with this evolving landscape, they must be able to generalize across humor types by capturing deeper, transferable mechanisms. To investigate this, we conduct a series of transfer learning experiments across four datasets, representing different humor tasks. We train LLMs under varied diversity settings (1-3 datasets in training, testing on a novel task). Experiments reveal that models are capable of some transfer, and can reach up to 75% accuracy on unseen datasets; training on diverse sources improves transferability (1.88-4.05%) with minimal-to-no drop in in-domain performance. Further analysis suggests relations between humor types, with Dad Jokes surprisingly emerging as the best enabler of transfer (though it is itself difficult to transfer to). We release data and code.
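To make the experimental protocol concrete, below is a minimal sketch of the leave-one-task-out transfer setup the abstract describes: train on every 1-, 2-, or 3-dataset combination of the remaining humor tasks and evaluate on the held-out one. Apart from Dad Jokes, the dataset names are assumptions, the toy examples are placeholders, and the TF-IDF + logistic-regression classifier stands in for the fine-tuned LLMs the paper actually uses, so the sketch stays self-contained and runnable; the printed numbers are meaningless on this toy data.

```python
"""Sketch of the cross-dataset humor transfer protocol (assumed setup).

Each dataset holds (text, is_humorous) pairs; we hold out one task,
train on 1-3 of the others, and measure accuracy on the held-out task.
"""
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder data; only "dad_jokes" is named in the abstract.
DATASETS = {
    "dad_jokes": [("Why did the scarecrow win an award? He was outstanding in his field.", 1),
                  ("The meeting is rescheduled to 3pm.", 0)] * 20,
    "puns":      [("I used to be a banker but I lost interest.", 1),
                  ("Interest rates rose this quarter.", 0)] * 20,
    "one_liners":[("I told my wife she was drawing her eyebrows too high. She looked surprised.", 1),
                  ("Please submit the report by Friday.", 0)] * 20,
    "satire":    [("Area man passionate defender of what he imagines constitution to be.", 1),
                  ("Senate passes annual budget bill.", 0)] * 20,
}

def train_and_eval(train_names, test_name):
    """Train on the union of `train_names`; report accuracy on `test_name`."""
    train = [ex for name in train_names for ex in DATASETS[name]]
    X_train, y_train = zip(*train)
    X_test, y_test = zip(*DATASETS[test_name])
    # Stand-in classifier; the paper fine-tunes LLMs instead.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

# Hold out each task in turn; vary training diversity from 1 to 3 sources.
for test_name in DATASETS:
    sources = [n for n in DATASETS if n != test_name]
    for k in (1, 2, 3):
        accs = [train_and_eval(combo, test_name) for combo in combinations(sources, k)]
        print(f"test={test_name:10s} k={k} mean_acc={sum(accs) / len(accs):.3f}")
```

Averaging over all k-sized source combinations, as done here, is what lets the diversity effect (1-source vs. 3-source training) be compared fairly against each held-out task.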

Country of Origin
🇮🇱 🇺🇸 Israel, United States

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Computation and Language