Get Global Guarantees: On the Probabilistic Nature of Perturbation Robustness

Published: August 26, 2025 | arXiv ID: 2508.19183v1

By: Wenchuan Mu, Kwan Hui Lim

Potential Business Impact:

Makes AI systems safer by rigorously quantifying how often a model's predictions break under small input perturbations before deployment.

Business Areas:
A/B Testing, Data and Analytics

In safety-critical deep learning applications, robustness measures the ability of neural models to handle imperceptible perturbations in input data, which may otherwise lead to safety hazards. Existing pre-deployment robustness assessment methods typically suffer from significant trade-offs between computational cost and measurement precision, limiting their practical utility. To address these limitations, this paper conducts a comprehensive comparative analysis of existing robustness definitions and their associated assessment methodologies. We propose tower robustness, a novel, practical metric based on hypothesis testing that quantitatively evaluates probabilistic robustness, enabling more rigorous and efficient pre-deployment assessments. Our extensive comparative evaluation illustrates the advantages and applicability of the proposed approach, thereby advancing the systematic understanding and enhancement of model robustness in safety-critical deep learning applications.
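The abstract does not give the exact definition of tower robustness, but the general idea it points to (a hypothesis-testing estimate of probabilistic robustness) can be sketched as follows. This is a minimal illustration, not the paper's method: it samples random perturbations of an input, counts how often the model's prediction is unchanged, and converts the empirical rate into a high-confidence lower bound via a standard Hoeffding concentration bound. The `model`, `epsilon`, and bound choice here are all illustrative assumptions.

```python
import math
import random

def robustness_lower_bound(model, x, epsilon, n_samples=1000, alpha=0.05, seed=0):
    """Return a (1 - alpha)-confidence lower bound on the probability that
    `model` keeps its prediction when `x` is perturbed by uniform noise of
    magnitude at most `epsilon` per coordinate.

    Illustrative sketch only: uses a Hoeffding bound, a standard
    hypothesis-testing tool; the paper's tower robustness metric may differ.
    """
    rng = random.Random(seed)
    base_pred = model(x)
    hits = 0
    for _ in range(n_samples):
        # Sample one random perturbation within the epsilon-ball (L-infinity).
        x_pert = [xi + rng.uniform(-epsilon, epsilon) for xi in x]
        if model(x_pert) == base_pred:
            hits += 1
    p_hat = hits / n_samples
    # Hoeffding: P(true rate < p_hat - t) <= exp(-2 * n * t^2);
    # solve for t at confidence level alpha.
    t = math.sqrt(math.log(1.0 / alpha) / (2.0 * n_samples))
    return max(0.0, p_hat - t)

# Hypothetical toy classifier: predicts 1 when the coordinates sum to >= 0.
toy_model = lambda x: 1 if sum(x) >= 0 else 0

# The input [0.5, 0.3] sits well inside class 1, so the bound should be high.
lb = robustness_lower_bound(toy_model, [0.5, 0.3], epsilon=0.1)
```

The key trade-off the abstract mentions shows up directly here: a tighter bound (smaller `t`) requires more samples, so `n_samples` controls the balance between computational cost and measurement precision.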

Repos / Data Links

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)