Localized Calibrated Uncertainty in Code Language Models
By: David Gros, Prem Devanbu
Large Language Models (LLMs) can generate complicated source code from natural language prompts. However, LLM output can deviate from what the user wants, requiring supervision and editing. To support this process, we offer techniques to localize where generations might be misaligned with user intent. We first create a dataset of "Minimal Intent Aligning Patches" that repair LLM-generated programs, using test cases to verify the correctness of each program. With this dataset, we measure how well various techniques can assign a well-calibrated probability that a given part of the code will be edited in a minimal patch (i.e., a probability that corresponds to the empirical odds it is edited). We compare white-box probing (for which we propose a technique for efficient arbitrary-span querying) against black-box reflective and self-consistency based approaches. We find that probes with a small supervisor model can achieve low calibration error and a Brier Skill Score of approximately 0.2 when estimating edited lines in code generated by models many orders of magnitude larger. We discuss the generalizability of the techniques and their connections to AI oversight and control, finding that a probe trained only on code shows some signs of generalizing to natural language errors if new probability scaling is allowed.
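As a rough illustration of the metrics named in the abstract, the sketch below computes a Brier Skill Score (relative to an always-predict-the-base-rate reference) and an expected calibration error for per-line edit probabilities. This is not the paper's evaluation code; the function names, the equal-width binning, and the toy data are illustrative assumptions.

```python
import numpy as np

def brier_skill_score(probs: np.ndarray, labels: np.ndarray) -> float:
    """Brier Skill Score: 1 - BS / BS_ref, where BS_ref uses the base rate.

    probs  -- predicted probability that each line is edited in the minimal patch
    labels -- 1 if the line was actually edited, 0 otherwise
    """
    bs = np.mean((probs - labels) ** 2)          # Brier score of the predictions
    base_rate = labels.mean()                    # constant base-rate reference predictor
    bs_ref = np.mean((base_rate - labels) ** 2)  # Brier score of that reference
    return 1.0 - bs / bs_ref

def expected_calibration_error(probs, labels, n_bins: int = 10) -> float:
    """ECE with equal-width bins: the gap between predicted probability and
    empirical edit rate in each bin, weighted by the fraction of lines in it."""
    probs, labels = np.asarray(probs, dtype=float), np.asarray(labels, dtype=float)
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return ece

# Toy usage: four generated lines with predicted edit probabilities and
# ground-truth labels from a hypothetical minimal patch.
probs = np.array([0.9, 0.2, 0.1, 0.7])
labels = np.array([1, 0, 0, 1])
print(brier_skill_score(probs, labels), expected_calibration_error(probs, labels))
```

A positive Brier Skill Score means the probe beats the base-rate predictor; a score near 0.2, as reported in the abstract, indicates a modest but real improvement over that baseline.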