Do LLMs Trust the Code They Write?
By: Francisco Ribeiro, Claudio Spiess, Prem Devanbu, and more
Potential Business Impact:
Helps pick AI-generated code that is more likely to be correct, without running tests.
Despite the effectiveness of large language models (LLMs) for code generation, they often output incorrect code. One reason is that model output probabilities are often poorly correlated with correctness and reflect only the final output of the generation process. Inspired by findings that LLMs internally encode concepts like truthfulness, this paper explores whether LLMs similarly represent code correctness. Specifically, we identify a correctness representation inside LLMs by contrasting the hidden states between pairs of correct and incorrect code for the same programming tasks. Experimenting with four LLMs, we show that exploiting this extracted correctness representation outperforms both standard log-likelihood ranking and verbalized model confidence. Furthermore, we explore how this internal correctness signal can be used to select higher-quality code samples without requiring test execution. Ultimately, this work demonstrates how leveraging internal representations can enhance code generation systems and make LLMs more reliable, improving confidence in automatically generated code.
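One plausible reading of "contrasting the hidden states between pairs of correct and incorrect code" is a difference-in-means probe: average the hidden-state differences across pairs to get a correctness direction, then score new samples by projecting onto it. The sketch below illustrates that reading only; the model choice (gpt2 as a stand-in for the paper's four code LLMs), the probe layer, the last-token readout, and the toy code pairs are all assumptions, not the paper's actual setup.

```python
# Minimal sketch of a difference-in-means correctness probe (assumed setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper evaluates four code LLMs
LAYER = 6            # assumed probe layer; the best layer is model-specific

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_state(code: str) -> torch.Tensor:
    """Hidden state of the final token at the chosen layer."""
    inputs = tokenizer(code, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

# Toy (correct, incorrect) solution pairs for the same tasks.
pairs = [
    ("def add(a, b):\n    return a + b",
     "def add(a, b):\n    return a - b"),
    ("def is_even(n):\n    return n % 2 == 0",
     "def is_even(n):\n    return n % 2 == 1"),
]

# Contrast hidden states: the mean difference defines a "correctness direction".
diffs = [last_token_state(good) - last_token_state(bad) for good, bad in pairs]
direction = torch.stack(diffs).mean(dim=0)
direction = direction / direction.norm()

def correctness_score(code: str) -> float:
    """Project a sample's hidden state onto the correctness direction;
    a higher score suggests the model internally treats the code as correct."""
    return float(last_token_state(code) @ direction)

# Rank candidate samples by the internal signal instead of log-likelihood,
# selecting a higher-quality sample without executing any tests.
candidates = ["def add(a, b):\n    return a + b",
              "def add(a, b):\n    return a * b"]
print(max(candidates, key=correctness_score))
```

Ranking by this projection uses a signal from inside the forward pass, which is exactly why it can diverge from log-likelihood ranking: a fluent but wrong completion can be highly probable while still sitting on the "incorrect" side of the extracted direction.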
Similar Papers
Is LLM-Generated Code More Maintainable & Reliable than Human-Written Code?
Software Engineering
Compares whether AI-written code is easier to maintain and has fewer bugs than human-written code.
Uncovering Systematic Failures of LLMs in Verifying Code Against Natural Language Specifications
Software Engineering
Computers can't always tell if code matches instructions.
Toward Automated and Trustworthy Scientific Analysis and Visualization with LLM-Generated Code
Software Engineering
AI writes code to analyze and visualize scientists' data.