Multicalibration for LLM-based Code Generation
By: Viola Campos, Robin Kuschnereit, Adrian Ulges
Potential Business Impact:
Makes AI write better, more trustworthy computer code.
As AI-based code generation becomes widespread, the calibration of code LLMs - ensuring that their confidence scores faithfully represent the true likelihood of code correctness - is drawing increasing attention. To address this, we investigate multicalibration, which can capture additional attributes of a coding problem, such as its complexity, code length, or the programming language used. We study four multicalibration approaches on three function synthesis benchmarks, using latest-generation code LLMs (Qwen3 Coder, GPT-OSS, DeepSeek-R1-Distill). Our results demonstrate that multicalibration yields clear improvements over both uncalibrated token likelihoods (+1.03 in skill score) and baseline calibration methods (+0.37 in skill score). We analyze the influence of the aforementioned attributes in ablations, and we release our dataset (consisting of code generations, likelihoods, and correctness labels) for future research on code LLM calibration.
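To make the idea concrete, below is a minimal sketch of a classic multicalibration post-processing loop in the style of Hébert-Johnson et al.: confidence scores are iteratively patched until every (group, confidence-bin) cell agrees with its empirical accuracy. This is a generic illustration, not the paper's implementation; the four approaches studied in the paper are not specified in the abstract, and all names and parameters here (multicalibrate, n_bins, alpha, the group masks) are assumptions. Groups could be, for example, boolean masks for the programming language or code-length buckets mentioned above.

```python
# Sketch of an HKRR-style multicalibration loop (illustrative, not the
# authors' code). Inputs: raw confidences p in [0, 1], binary correctness
# labels y, and a dict mapping group names to boolean membership masks.
import numpy as np

def multicalibrate(p, y, groups, n_bins=10, alpha=0.02, max_iters=100):
    """Patch predictions until every (group, bin) cell is calibrated to
    within alpha. Returns the adjusted confidence scores."""
    p = p.astype(float).copy()
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for _ in range(max_iters):
        updated = False
        for mask in groups.values():
            # Re-bin the current predictions within this group.
            bins = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
            for b in range(n_bins):
                cell = mask & (bins == b)
                if not cell.any():
                    continue
                gap = y[cell].mean() - p[cell].mean()
                if abs(gap) > alpha:
                    # Shift the cell toward its empirical accuracy.
                    p[cell] = np.clip(p[cell] + gap, 0.0, 1.0)
                    updated = True
        if not updated:  # all cells calibrated to within alpha
            break
    return p

# Hypothetical usage: calibrate confidences across two language groups.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
y = (rng.uniform(size=1000) < p ** 2).astype(float)  # miscalibrated
groups = {"python": rng.uniform(size=1000) < 0.5}
groups["other"] = ~groups["python"]
p_cal = multicalibrate(p, y, groups)
```

In practice the per-cell adjustments would be fitted on a calibration split and then applied to held-out predictions; the sketch above shows only the fitting step for brevity.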
Similar Papers
The Fools are Certain; the Wise are Doubtful: Exploring LLM Confidence in Code Completion
Software Engineering
Helps computers write code more reliably.
Does In-IDE Calibration of Large Language Models work at Scale?
Software Engineering
Makes computer code suggestions more trustworthy.
A Confidence-Diversity Framework for Calibrating AI Judgement in Accessible Qualitative Coding Tasks
Machine Learning (CS)
Makes AI judgements in qualitative coding more trustworthy.