Semantic-Aware Confidence Calibration for Automated Audio Captioning
By: Lucas Dunker, Sai Akshay Menta, Snigdha Mohana Addepalli, and more
Potential Business Impact:
Makes sound descriptions more truthful and trustworthy.
Automated audio captioning models frequently produce overconfident predictions regardless of semantic accuracy, limiting their reliability in deployment. This deficiency stems from two factors: evaluation metrics based on n-gram overlap that fail to capture semantic correctness, and the absence of calibrated confidence estimation. We present a framework that addresses both limitations by integrating confidence prediction into audio captioning and redefining correctness through semantic similarity. Our approach augments a Whisper-based audio captioning model with a learned confidence prediction head that estimates uncertainty from decoder hidden states. We employ CLAP audio-text embeddings and sentence transformer similarities (FENSE) to define semantic correctness, enabling Expected Calibration Error (ECE) computation that reflects true caption quality rather than surface-level text overlap. Experiments on Clotho v2 demonstrate that confidence-guided beam search with semantic evaluation achieves dramatically improved calibration (CLAP-based ECE of 0.071) compared to greedy decoding baselines (ECE of 0.488), while simultaneously improving caption quality across standard metrics. Our results establish that semantic similarity provides a more meaningful foundation for confidence calibration in audio captioning than traditional n-gram metrics.
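To make the calibration idea concrete, below is a minimal sketch (not the authors' released code) of how Expected Calibration Error can be computed when correctness is defined by semantic similarity rather than n-gram overlap: a caption counts as correct if its CLAP or FENSE similarity to the reference exceeds a threshold. The `semantic_ece` helper, the threshold `tau`, and the bin count are illustrative assumptions, not names or values from the paper.

```python
# Sketch: ECE with semantic correctness labels. Similarity scores (e.g., CLAP
# audio-text cosine or FENSE sentence-embedding similarity) are assumed to be
# precomputed per caption; tau and n_bins are illustrative choices.
import numpy as np

def semantic_ece(confidences, similarities, tau=0.5, n_bins=10):
    """ECE over equal-width confidence bins, where a caption is 'correct'
    if its semantic similarity to the reference is at least tau."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(similarities, dtype=float) >= tau).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        avg_conf = confidences[in_bin].mean()  # mean predicted confidence in bin
        avg_acc = correct[in_bin].mean()       # fraction semantically correct in bin
        ece += in_bin.mean() * abs(avg_conf - avg_acc)  # weight by bin mass
    return ece

# Example: confidences that track semantic similarity yield low ECE.
conf = [0.9, 0.8, 0.3, 0.6]
sim = [0.7, 0.65, 0.2, 0.55]  # e.g., CLAP cosine similarities vs. references
print(semantic_ece(conf, sim))
```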
Similar Papers
BRACE: A Benchmark for Robust Audio Caption Quality Evaluation
Sound
Tests how well computers describe sounds.
GrACE: A Generative Approach to Better Confidence Elicitation in Large Language Models
Computation and Language
Makes AI tell you when it's unsure.
Object-Level Verbalized Confidence Calibration in Vision-Language Models via Semantic Perturbation
Computer Vision and Pattern Recognition
Makes AI tell you when it's unsure.