Model See, Model Do? Exposure-Aware Evaluation of Bug-vs-Fix Preference in Code LLMs
By: Ali Al-Kaswan, Claudio Spiess, Prem Devanbu, and more
Large language models are increasingly used for code generation and debugging, but their outputs can still contain bugs that originate from training data. Whether an LLM prefers correct code or a familiar incorrect version may be influenced by what it was exposed to during training. We introduce an exposure-aware evaluation framework that quantifies how prior exposure to buggy versus fixed code influences a model's preference. Using the ManySStuBs4J benchmark, we apply Data Portraits for membership testing on the Stack-V2 corpus to estimate whether each buggy and fixed variant was seen during training. We then stratify examples by exposure and compare model preference using code completion as well as multiple likelihood-based scoring metrics. We find that most examples (67%) have neither variant in the training data, and when only one variant is present, fixes appear more often than bugs. In model generations, models reproduce buggy lines far more often than fixes, with bug-exposed examples amplifying this tendency and fix-exposed examples showing only marginal improvement. In likelihood scoring, minimum and maximum token-probability metrics consistently prefer the fixed code across all conditions, indicating a stable bias toward correct fixes. In contrast, metrics like the Gini coefficient reverse their preference when only the buggy variant was seen. Our results indicate that exposure can skew bug-fix evaluations and highlight the risk that LLMs may propagate memorised errors in practice.
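The abstract names several likelihood-based scoring metrics (minimum and maximum token probability, the Gini coefficient) used to compare a model's preference between the buggy and fixed variants. As a rough illustration of how such metrics can be computed from per-token probabilities, here is a minimal sketch; the function names, input format, and exact metric definitions are assumptions for illustration, not taken from the paper:

```python
import math

def token_scores(logprobs):
    """Summarise a sequence of per-token natural-log probabilities
    (as returned by a model scoring pass) with likelihood metrics
    of the kind named in the abstract. Definitions here are
    illustrative; the paper's exact formulations may differ."""
    probs = [math.exp(lp) for lp in logprobs]
    n = len(probs)
    # Minimum and maximum token probability over the sequence.
    min_p, max_p = min(probs), max(probs)
    # Gini coefficient over token probabilities: 0 means uniform
    # confidence across tokens; values near 1 mean confidence is
    # concentrated on a few tokens.
    sorted_p = sorted(probs)
    weighted = sum((2 * i - n + 1) * p for i, p in enumerate(sorted_p))
    gini = weighted / (n * sum(sorted_p))
    return {"min_prob": min_p, "max_prob": max_p, "gini": gini}

def prefers_fix(metric, fix_scores, bug_scores, higher_is_better=True):
    """Preference under a metric is simply which variant scores
    better; the direction depends on the metric's orientation."""
    if higher_is_better:
        return fix_scores[metric] > bug_scores[metric]
    return fix_scores[metric] < bug_scores[metric]
```

For example, scoring both variants of a bug-fix pair and calling `prefers_fix("min_prob", ...)` yields a binary preference that can then be aggregated within each exposure stratum (neither seen, bug seen, fix seen, both seen).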
Similar Papers
From Model to Breach: Towards Actionable LLM-Generated Vulnerabilities Reporting
Computation and Language
Finds and fixes dangerous code mistakes made by AI.
LLMs are Bug Replicators: An Empirical Study on LLMs' Capability in Completing Bug-prone Code
Software Engineering
Computers struggle to fix buggy code.
LLMs in Code Vulnerability Analysis: A Proof of Concept
Software Engineering
Helps computers find and fix code mistakes.