Model-Free Assessment of Simulator Fidelity via Quantile Curves
By: Garud Iyengar, Yu-Shiou Willy Lin, Kaizheng Wang
Potential Business Impact:
Measures how well computer simulations match real life.
Simulation of complex systems originated in manufacturing and queueing applications. It is now widely used for large-scale, machine-learning-based systems in research, education, and consumer surveys. However, characterizing the discrepancy between simulators and ground truth remains challenging for increasingly complex, machine-learning-based systems. We propose a computationally tractable method to estimate the quantile function of the discrepancy between the simulated and ground-truth outcome distributions. Our approach focuses on output uncertainty and treats the simulator as a black box, imposing no modeling assumptions on its internals; hence it applies broadly across many parameter families, from Bernoulli and multinomial models to continuous, vector-valued settings. The resulting quantile curve supports confidence interval construction for unseen scenarios, risk-aware summaries of sim-to-real discrepancy (e.g., VaR/CVaR), and comparison of simulator performance. We demonstrate our methodology in an application assessing LLM simulation fidelity on the WorldValueBench dataset spanning four LLMs.
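To make the idea of a discrepancy quantile curve concrete, here is a minimal sketch (not the authors' implementation): given per-scenario discrepancies between simulated and real outcome distributions, it computes the empirical quantile curve and the VaR/CVaR summaries mentioned in the abstract. The use of total variation distance and all function names here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def tv_distance(p_sim, p_real):
    """Total variation distance between two discrete outcome distributions (assumed metric)."""
    p_sim, p_real = np.asarray(p_sim, float), np.asarray(p_real, float)
    return 0.5 * np.abs(p_sim - p_real).sum()

def quantile_curve(discrepancies, levels=np.linspace(0.01, 0.99, 99)):
    """Empirical quantile function of the sim-to-real discrepancy across scenarios."""
    return levels, np.quantile(discrepancies, levels)

def var_cvar(discrepancies, alpha=0.9):
    """Value-at-Risk (alpha-quantile) and Conditional Value-at-Risk (mean of the tail)."""
    d = np.sort(np.asarray(discrepancies, float))
    var = np.quantile(d, alpha)
    return var, d[d >= var].mean()

# Example with synthetic per-scenario distributions (binary outcomes).
disc = [tv_distance([0.6, 0.4], [0.55, 0.45]),
        tv_distance([0.2, 0.8], [0.35, 0.65]),
        tv_distance([0.5, 0.5], [0.50, 0.50])]
levels, q = quantile_curve(disc)
print(var_cvar(disc, alpha=0.9))
```

The same quantile curve computed for two different simulators can be compared level by level, which is one way the paper's summaries could support simulator comparison.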
Similar Papers
How Confident are Video Models? Empowering Video Models to Express their Uncertainty
CV and Pattern Recognition
Helps AI know when it's making fake videos.
Fractional Cumulative Residual Inaccuracy in the Quantile Framework and its Applications
Applications
Finds differences between complex systems.
PCS-UQ: Uncertainty Quantification via the Predictability-Computability-Stability Framework
Machine Learning (Stat)
Makes AI predictions more accurate and trustworthy.