Hallucination by Code Generation LLMs: Taxonomy, Benchmarks, Mitigation, and Challenges

Published: April 29, 2025 | arXiv ID: 2504.20799v2

By: Yunseo Lee, John Youngeun Song, Dongsun Kim, and more

Potential Business Impact:

Helps organizations detect and fix incorrect (hallucinated) code produced by AI coding assistants.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent technical breakthroughs in large language models (LLMs) have enabled them to fluently generate source code. Software developers often leverage both general-purpose and code-specialized LLMs to revise existing code or even generate whole functions from scratch. These capabilities also benefit no-code and low-code contexts, in which people without a technical background can write programs. However, due to their internal design, LLMs are prone to hallucinations: output that is incorrect, nonsensical, or unjustifiable, and whose presence is difficult to detect. This problem also arises when generating source code. Once hallucinated code is produced, it is often challenging for users to identify and fix, especially when the defect manifests only under specific execution paths. As a result, hallucinated code may remain unnoticed in the codebase. This survey investigates recent studies and techniques relevant to hallucinations generated by code LLMs (CodeLLMs). We categorize the types of hallucinations in code generated by CodeLLMs, review existing benchmarks and mitigation strategies, and identify open challenges. Based on these findings, the survey outlines further research directions for detecting and removing hallucinations produced by CodeLLMs.
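To make the detection problem concrete, here is a minimal, hypothetical sketch (not taken from the paper) of the kind of hallucinated code the abstract describes: output that looks plausible, passes casual testing, and fails only on a rarely-taken execution path.

```python
def is_leap_year(year: int) -> bool:
    """Plausible-looking generated code: correct on the common path,
    but the Gregorian century rule has been hallucinated away."""
    # Missing logic: years divisible by 100 are not leap years
    # unless they are also divisible by 400.
    return year % 4 == 0

# Casual testing exercises only the common execution path and passes:
print(is_leap_year(2024))  # True  (correct)
print(is_leap_year(2023))  # False (correct)

# The defect surfaces only on the rarely-taken century path:
print(is_leap_year(1900))  # True  (wrong: 1900 was not a leap year)
```

Because the common-path behavior is correct, conventional spot checks and even many test suites never trigger the faulty branch, which is why such code can sit unnoticed in a codebase.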

Country of Origin
🇰🇷 Korea, Republic of

Page Count
15 pages

Category
Computer Science:
Software Engineering