How do Humans and LLMs Process Confusing Code?
By: Youssef Abdelsalam, Norman Peitek, Anna-Maria Maurer, and more
Potential Business Impact:
Finds code that confuses both human programmers and AI.
Already today, humans and programming assistants based on large language models (LLMs) collaborate in everyday programming tasks. Clearly, a misalignment between how LLMs and programmers comprehend code can lead to misunderstandings, inefficiencies, low code quality, and bugs. A key question in this space is whether humans and LLMs are confused by the same kind of code. The answer would not only guide our choices in integrating LLMs into software engineering workflows, but also inform possible improvements to LLMs. To this end, we conducted an empirical study comparing how an LLM and human programmers comprehend clean and confusing code. We operationalized comprehension for the LLM using perplexity, and for human programmers using neurophysiological responses (in particular, EEG-based fixation-related potentials). We found that LLM perplexity spikes correlate in both location and amplitude with human neurophysiological responses that indicate confusion. This result suggests that LLMs and humans are confused by the same code. Based on these findings, we devised a data-driven, LLM-based approach to identify code regions that are likely to confuse human programmers.
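To make the perplexity signal concrete, the sketch below computes per-token surprisal (negative log-probability) for a code snippet with an off-the-shelf causal language model and flags spikes. This is a minimal illustration only: the model choice (gpt2), the example snippet, and the mean-plus-two-standard-deviations spike threshold are our assumptions, not the study's actual setup.

```python
# Minimal sketch: locate "perplexity spikes" in a code snippet.
# Assumptions (not from the paper): gpt2 as the LM, and a
# mean + 2*std threshold as the spike heuristic.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # any causal LM would do for this sketch
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

code = "int v = n; int r = 0;\nwhile (v) { r = r * 10 + v % 10; v /= 10; }"

ids = tok(code, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits

# Per-token surprisal: -log p(token_t | tokens_<t).
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = ids[0, 1:]
surprisal = -log_probs[torch.arange(targets.numel()), targets]

# Flag tokens whose surprisal exceeds mean + 2 standard deviations.
threshold = surprisal.mean() + 2 * surprisal.std()
for t, s in zip(targets, surprisal):
    if s > threshold:
        print(f"spike at {tok.decode(int(t))!r}: surprisal {float(s):.2f}")
```

Perplexity is the exponential of the mean surprisal over a span; the study relates the locations and amplitudes of such spikes to EEG-based fixation-related potentials measured while programmers read the same code.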
Similar Papers
Model-Assisted and Human-Guided: Perceptions and Practices of Software Professionals Using LLMs for Coding
Software Engineering
Helps coders build software faster and smarter.
"I Would Have Written My Code Differently'': Beginners Struggle to Understand LLM-Generated Code
Software Engineering
Helps new coders understand computer-written code.
Uncovering Systematic Failures of LLMs in Verifying Code Against Natural Language Specifications
Software Engineering
Computers can't always tell if code matches instructions.