Score: 1

Metamorphic Testing of Large Language Models for Natural Language Processing

Published: November 3, 2025 | arXiv ID: 2511.02108v1

By: Steven Cho, Stefano Ruberto, Valerio Terragni

Potential Business Impact:

Automatically uncovers faulty LLM outputs on NLP tasks without requiring labeled datasets, helping teams vet and improve LLM-based products.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Using large language models (LLMs) to perform natural language processing (NLP) tasks has become increasingly pervasive in recent times. The versatile nature of LLMs makes them applicable to a wide range of such tasks. While the performance of recent LLMs is generally outstanding, several studies have shown that they can often produce incorrect results. Automatically identifying these faulty behaviors is extremely useful for improving the effectiveness of LLMs. One obstacle to this is the limited availability of labeled datasets, which necessitates an oracle to determine the correctness of LLM behaviors. Metamorphic testing (MT) is a popular testing approach that alleviates this oracle problem. At the core of MT are metamorphic relations (MRs), which define relationships between the outputs of related inputs. MT can expose faulty behaviors without the need for explicit oracles (e.g., labeled datasets). This paper presents the most comprehensive study of MT for LLMs to date. We conducted a literature review and collected 191 MRs for NLP tasks. We implemented a representative subset (36 MRs) to conduct a series of experiments with three popular LLMs, running approximately 560,000 metamorphic tests. The results shed light on the capabilities and opportunities of MT for LLMs, as well as its limitations.
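To make the idea of a metamorphic relation concrete, here is a minimal sketch (not from the paper) of one illustrative MR for an LLM-based sentiment classification task: replacing a word with a synonym should not change the predicted label. The `query_llm` and `classify_sentiment` functions are hypothetical placeholders for whatever LLM call is being tested; the paper itself catalogs 191 MRs and implements 36 of them.

```python
# Sketch of a metamorphic test for an LLM-based NLP task (sentiment
# classification). The MR -- "replacing a word with a synonym should not
# change the predicted sentiment" -- is one illustrative relation, not
# necessarily one of the paper's 36 implemented MRs.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in an actual API or local model."""
    raise NotImplementedError

def classify_sentiment(text: str) -> str:
    # Ask the LLM for a sentiment label and normalize its answer.
    prompt = f"Classify the sentiment of this sentence as positive or negative: {text}"
    return query_llm(prompt).strip().lower()

def synonym_replacement_mr(source: str, follow_up: str) -> bool:
    """Check the MR: the follow-up input must receive the same label.

    No labeled ground truth is needed -- only the relationship between
    the two outputs is checked, which is how MT sidesteps the oracle problem.
    """
    return classify_sentiment(source) == classify_sentiment(follow_up)

if __name__ == "__main__":
    source = "The film was delightful from start to finish."
    follow_up = "The movie was delightful from start to finish."  # film -> movie
    if not synonym_replacement_mr(source, follow_up):
        print("Metamorphic relation violated: possible faulty LLM behavior.")
```

A violation of the relation flags a potentially faulty behavior for human review; it does not by itself say which of the two outputs is wrong.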

Country of Origin
🇳🇿 New Zealand

Repos / Data Links

Page Count
13 pages

Category
Computer Science:
Software Engineering