Get Global Guarantees: On the Probabilistic Nature of Perturbation Robustness
By: Wenchuan Mu, Kwan Hui Lim
Potential Business Impact:
Makes AI safer by rigorously testing how often it makes mistakes.
In safety-critical deep learning applications, robustness measures the ability of neural models to handle imperceptible perturbations in input data, which may otherwise lead to safety hazards. Existing pre-deployment robustness assessment methods typically suffer from significant trade-offs between computational cost and measurement precision, limiting their practical utility. To address these limitations, this paper conducts a comprehensive comparative analysis of existing robustness definitions and their associated assessment methodologies. We propose tower robustness, a novel, practical metric based on hypothesis testing that quantitatively evaluates probabilistic robustness, enabling more rigorous and efficient pre-deployment assessments. Our extensive comparative evaluation illustrates the advantages and applicability of the proposed approach, thereby advancing the systematic understanding and enhancement of model robustness in safety-critical deep learning applications.
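The abstract does not spell out the tower robustness procedure, but the general idea of a hypothesis-testing-based probabilistic robustness check can be sketched as follows: sample random perturbations of an input, count how often the model's prediction stays correct, and compare a statistical lower confidence bound on the success probability against a target robustness level. This is a minimal illustration, not the authors' exact metric; the Hoeffding bound, the uniform perturbation model, the toy classifier, and all parameter names (`eps`, `threshold`, `alpha`) are assumptions made for the example.

```python
import math
import random

def hoeffding_lower_bound(successes, n, alpha):
    # One-sided Hoeffding lower confidence bound on the true success
    # probability, holding with confidence at least 1 - alpha.
    return successes / n - math.sqrt(math.log(1 / alpha) / (2 * n))

def probabilistic_robustness_test(classify, x, label, n=1000, eps=0.1,
                                  threshold=0.95, alpha=0.01, seed=0):
    """Sampling-based robustness check for a single input.

    Draws n uniform perturbations of x within radius eps, counts correct
    predictions, and declares the input probabilistically robust only if
    the lower confidence bound on the success rate meets the threshold.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x_pert = [xi + rng.uniform(-eps, eps) for xi in x]
        if classify(x_pert) == label:
            hits += 1
    lower = hoeffding_lower_bound(hits, n, alpha)
    return lower >= threshold, hits / n, lower

# Toy stand-in for a neural classifier: predicts 1 iff the features sum
# to a positive value (a real assessment would use a trained model).
classify = lambda x: int(sum(x) > 0)
robust, estimate, lower = probabilistic_robustness_test(classify, [1.0, 1.0], 1)
```

Because the toy input lies far from the decision boundary relative to `eps`, every sampled perturbation is classified correctly, and the confidence bound certifies robustness at the 95% level. The appeal of such sampling-based tests is that their cost depends only on the sample size, not on the network's architecture, which is the cost/precision trade-off the abstract highlights.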
Similar Papers
Probably Approximately Global Robustness Certification
Machine Learning (CS)
Makes AI smarter and safer from mistakes.
Non-Parametric Probabilistic Robustness: A Conservative Metric with Optimized Perturbation Distributions
CV and Pattern Recognition
Makes AI more trustworthy with unknown errors.
Quantifying Robustness: A Benchmarking Framework for Deep Learning Forecasting in Cyber-Physical Systems
Machine Learning (CS)
Makes computer predictions more reliable in factories.