Score: 2

Calibrating Generative Models

Published: October 11, 2025 | arXiv ID: 2510.10020v1

By: Henry D. Smith, Nathaniel L. Diamant, Brian L. Trippe

BigTech Affiliations: Stanford University

Potential Business Impact:

Makes AI more honest about what it knows.

Business Areas:
Simulation Software

Generative models frequently suffer from miscalibration, wherein class probabilities and other statistics of the sampling distribution deviate from desired values. We frame calibration as a constrained optimization problem and seek the closest model in Kullback-Leibler divergence that satisfies the calibration constraints. To address the intractability of imposing these constraints exactly, we introduce two surrogate objectives for fine-tuning: (1) the relax loss, which replaces the constraint with a miscalibration penalty, and (2) the reward loss, which converts calibration into a reward fine-tuning problem. We demonstrate that these approaches substantially reduce calibration error across hundreds of simultaneous constraints and models with up to one billion parameters, spanning applications in protein design, image generation, and language modeling.
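To make the "relax loss" idea concrete, here is a minimal toy sketch (not the paper's implementation): the objective is the KL divergence to the base model plus a squared penalty on constraint violations, minimized here by brute-force grid search over a 3-class distribution. The function name, shapes, and penalty weight are illustrative assumptions.

```python
import numpy as np

def relax_loss(q, p, stats, targets, lam=100.0):
    """Sketch of a relax-style surrogate: KL(q || p) plus a squared-error
    penalty for violating calibration constraints. `stats` is a (k, n)
    matrix of constraint statistics and `targets` their desired
    expectations; names and the penalty weight `lam` are illustrative."""
    kl = np.sum(q * np.log(q / p))
    violation = stats @ q - targets  # E_q[f_i] - c_i for each constraint i
    return kl + lam * np.sum(violation ** 2)

# Toy setup: base model p over 3 classes; constrain P(class 0) toward 0.5.
p = np.array([0.7, 0.2, 0.1])
stats = np.array([[1.0, 0.0, 0.0]])  # statistic: indicator of class 0
targets = np.array([0.5])

# Brute-force search over the simplex stands in for gradient fine-tuning.
best_q, best_val = None, np.inf
grid = np.linspace(0.01, 0.98, 50)
for a in grid:
    for b in grid:
        if a + b < 0.99:  # keep the third probability positive
            q = np.array([a, b, 1.0 - a - b])
            val = relax_loss(q, p, stats, targets)
            if val < best_val:
                best_q, best_val = q, val

# The minimizer moves class-0 mass from 0.7 toward the target 0.5
# while staying close (in KL) to the base model.
print(best_q)
```

With a large penalty weight the minimizer sits near the constraint; shrinking `lam` trades calibration accuracy for closeness to the base model, which is the tension the constrained formulation makes explicit.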

Country of Origin
🇺🇸 United States


Page Count
43 pages

Category
Statistics: Machine Learning (stat.ML)