A Conformal Risk Control Framework for Granular Word Assessment and Uncertainty Calibration of CLIPScore Quality Estimates
By: Gonçalo Gomes, Bruno Martins, Chrysoula Zerva
Potential Business Impact:
Helps computers judge picture descriptions better.
This study addresses two limitations of learned image captioning evaluation metrics: the lack of granular assessments for errors within captions, and the reliance on single-point quality estimates that ignore uncertainty. To address these limitations, we propose a simple yet effective strategy for generating and calibrating distributions of CLIPScore values. Leveraging a model-agnostic conformal risk control framework, we calibrate CLIPScore values according to task-specific control variables. Experimental results demonstrate that conformal risk control, applied over score distributions produced with simple methods such as input masking, can achieve competitive performance compared to more complex approaches. Our method effectively detects erroneous words while providing formal guarantees aligned with desired risk levels, and it improves the correlation between uncertainty estimates and prediction errors, thus enhancing the overall reliability of caption evaluation metrics.
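As a rough illustration of the recipe the abstract describes, the sketch below combines the two ingredients: per-word score distributions obtained by masking each caption word and measuring the resulting CLIPScore drop, and a conformal risk control search for a word-flagging threshold. The `clipscore` callable, the `[MASK]` token, the particular loss (fraction of truly erroneous words left unflagged), and all names and defaults are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def masked_score_drops(image, caption_words, clipscore):
    # Score distribution via input masking: drop in CLIPScore when each
    # word is replaced by a mask token. `clipscore` is a hypothetical
    # callable (image, caption_text) -> float standing in for CLIPScore.
    base = clipscore(image, " ".join(caption_words))
    drops = []
    for i in range(len(caption_words)):
        masked = caption_words[:i] + ["[MASK]"] + caption_words[i + 1:]
        drops.append(base - clipscore(image, " ".join(masked)))
    return np.asarray(drops)

def calibrate_threshold(cal_drops, cal_labels, alpha=0.1, bound=1.0):
    # Conformal risk control: pick the largest threshold lam whose
    # corrected empirical risk on the calibration set stays below alpha.
    # Per-caption loss here: fraction of truly erroneous words
    # (cal_labels == 1) whose score drop falls below lam, i.e. misses.
    # Assumes score drops have been normalized to [0, 1].
    n = len(cal_drops)
    for lam in np.linspace(1.0, 0.0, 201):  # scan thresholds high to low
        losses = []
        for drops, labels in zip(cal_drops, cal_labels):
            flagged = drops >= lam
            errors = labels.astype(bool)
            missed = errors & ~flagged
            losses.append(missed.sum() / max(errors.sum(), 1))
        risk = float(np.mean(losses))
        # Standard CRC bound: (n/(n+1)) * R_hat(lam) + B/(n+1) <= alpha
        if (n / (n + 1)) * risk + bound / (n + 1) <= alpha:
            return float(lam)
    return 0.0  # no feasible threshold: flag every word

```

With a calibrated threshold, words in a new caption whose masked score drop meets or exceeds it would be flagged as likely errors, with expected miss rate at most alpha under the usual exchangeability assumption of conformal methods.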
Similar Papers
Conditional Conformal Risk Adaptation
Machine Learning (CS)
Improves medical scans to find problems more reliably.
Selective Conformal Risk Control
Machine Learning (CS)
Makes AI predictions more trustworthy and useful.
Correctness Coverage Evaluation for Medical Multiple-Choice Question Answering Based on the Enhanced Conformal Prediction Framework
Computation and Language
Makes AI answers about health more trustworthy.