How do Humans and LLMs Process Confusing Code?

Published: August 25, 2025 | arXiv ID: 2508.18547v1

By: Youssef Abdelsalam, Norman Peitek, Anna-Maria Maurer, and others

Potential Business Impact:

Identifies regions of code that confuse both human programmers and LLMs.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Already today, humans and programming assistants based on large language models (LLMs) collaborate on everyday programming tasks. A misalignment between how LLMs and programmers comprehend code can lead to misunderstandings, inefficiencies, low code quality, and bugs. A key question in this space is whether humans and LLMs are confused by the same kinds of code. The answer would not only guide our choices when integrating LLMs into software engineering workflows, but also inform possible improvements to LLMs. To this end, we conducted an empirical study comparing an LLM to human programmers comprehending clean and confusing code. We operationalized comprehension for the LLM via perplexity, and for human programmers via neurophysiological responses (in particular, EEG-based fixation-related potentials). We found that LLM perplexity spikes correlate, in both location and amplitude, with human neurophysiological responses that indicate confusion. This result suggests that LLMs and humans are confused by similar code. Based on these findings, we devised a data-driven, LLM-based approach to identify regions of code that elicit confusion in human programmers.
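The perplexity-spike idea from the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the token probabilities below are hypothetical stand-ins for what a causal language model might assign to each code token, and the 2x-mean threshold is an illustrative choice, not the study's criterion.

```python
import math

def surprisal(prob):
    # Surprisal (negative log2-probability) in bits; higher = more "surprising" token.
    return -math.log2(prob)

# Hypothetical per-token probabilities a causal LM might assign to a short
# snippet such as "int x = y --- z;" (illustrative numbers only).
tokens = ["int", "x", "=", "y", "--", "-", "z", ";"]
probs  = [0.60, 0.30, 0.70, 0.20, 0.02, 0.05, 0.25, 0.90]

surprisals = [surprisal(p) for p in probs]
mean_s = sum(surprisals) / len(surprisals)

# Snippet-level perplexity: 2 ** (mean surprisal).
perplexity = 2 ** mean_s

# Flag "confusion spikes": tokens whose surprisal far exceeds the snippet mean,
# analogous to the perplexity spikes the study correlates with EEG responses.
spikes = [t for t, s in zip(tokens, surprisals) if s > 2 * mean_s]

print(f"perplexity = {perplexity:.2f}, spikes = {spikes}")
```

Under these toy numbers, the improbable `--` token stands out as a spike, mirroring how a localized perplexity peak would mark a potentially confusing code region.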

Country of Origin
🇩🇪 Germany

Repos / Data Links

Page Count
12 pages

Category
Computer Science:
Software Engineering