Score: 1

Multicalibration for LLM-based Code Generation

Published: December 9, 2025 | arXiv ID: 2512.08810v1

By: Viola Campos, Robin Kuschnereit, Adrian Ulges

Potential Business Impact:

Makes AI write better, more trustworthy computer code.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As AI-based code generation becomes widespread, the calibration of code LLMs, i.e., ensuring that their confidence scores faithfully represent the true likelihood of code correctness, is drawing increasing research attention. We investigate multicalibration, which can capture additional factors about a coding problem, such as its complexity, code length, or the programming language used. We study four multicalibration approaches on three function synthesis benchmarks, using latest-generation code LLMs (Qwen3 Coder, GPT-OSS, DeepSeek-R1-Distill). Our results demonstrate that multicalibration can yield clear improvements over both uncalibrated token likelihoods (+1.03 in skill score) and baseline calibration methods (+0.37 in skill score). We study the influence of the aforementioned factors in ablations, and make our dataset (consisting of code generations, likelihoods, and correctness labels) available for future research on code LLM calibration.
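
The summary above reports skill score gains from multicalibration but does not spell out the procedure. A common way to multicalibrate is iterative patching in the spirit of Hébert-Johnson et al. (2018): repeatedly find a (group, confidence bin) cell whose mean prediction deviates from its empirical accuracy, and shift that cell's predictions toward the accuracy. The sketch below is illustrative only, not the paper's method: it assumes binary correctness labels and hand-built boolean group masks (e.g., per programming language or code-length bucket), and the function names, parameters, and the choice of Brier skill score as the evaluation metric are all assumptions.

```python
import numpy as np

def multicalibrate(probs, labels, groups, alpha=0.02, n_bins=10, max_iter=1000):
    """Iterative multicalibration patching (sketch).

    probs  : raw model confidences in [0, 1]
    labels : 0/1 correctness of each generation
    groups : dict mapping a group name (e.g. 'python', 'long_code')
             to a boolean mask over the samples; these stand in for
             the paper's factors (complexity, code length, language).
    """
    p = probs.astype(float).copy()
    labels = labels.astype(float)
    for _ in range(max_iter):
        updated = False
        for mask in groups.values():
            # Re-bin after every patch so cells track the current predictions.
            bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
            for b in range(n_bins):
                cell = mask & (bins == b)
                if not cell.any():
                    continue
                gap = labels[cell].mean() - p[cell].mean()
                # Patch any cell whose mean confidence is off by more than alpha.
                if abs(gap) > alpha:
                    p[cell] = np.clip(p[cell] + gap, 0.0, 1.0)
                    updated = True
        if not updated:  # every (group, bin) cell is within tolerance
            break
    return p

def brier_skill_score(probs, labels):
    """Brier skill score, 1 - BS/BS_ref, against the base-rate reference.
    One common instantiation of 'skill score'; assumed here, not confirmed."""
    bs = np.mean((probs - labels) ** 2)
    bs_ref = np.mean((labels.mean() - labels) ** 2)
    return 1.0 - bs / bs_ref

# Toy usage with a deliberately overconfident synthetic model.
rng = np.random.default_rng(0)
conf = rng.uniform(size=2000)
correct = (rng.uniform(size=2000) < conf ** 2).astype(float)
groups = {"all": np.ones(2000, dtype=bool),
          "python": rng.uniform(size=2000) < 0.5}   # hypothetical language mask
groups["java"] = ~groups["python"]
cal = multicalibrate(conf, correct, groups)
print(brier_skill_score(conf, correct), brier_skill_score(cal, correct))
```

On this synthetic data the patched predictions score strictly higher, mirroring the kind of skill score gain the abstract reports; the real evaluation would use held-out calibration and test splits rather than patching and scoring on the same samples.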

Country of Origin
🇩🇪 Germany

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Software Engineering