AutoODD: Agentic Audits via Bayesian Red Teaming in Black-Box Models
By: Rebecca Martin, Jay Patrikar, Sebastian Scherer
Potential Business Impact:
Finds AI mistakes before they cause problems.
Specialized machine learning models, regardless of architecture and training, are susceptible to failures in deployment. With their increasing use in high-risk settings, the ability to audit these models by determining their operational design domain (ODD) is crucial for ensuring safety and compliance. However, given the high-dimensional input spaces involved, this process often requires significant human effort and domain expertise. To alleviate this, we introduce AutoODD, an LLM-Agent-centric framework for automatically generating semantically relevant test cases that search for failure modes in specialized black-box models. By leveraging LLM-Agents as tool orchestrators, we fit an uncertainty-aware failure distribution model on a learned text-embedding manifold, projecting the high-dimensional input space onto a low-dimensional text-embedding latent space. The LLM-Agent iteratively builds the failure landscape, using tools to generate test cases that probe the model-under-test (MUT) and to record its responses. The agent also guides the search with tools that query uncertainty estimates on the low-dimensional manifold. We demonstrate this process in a simple setting, using models trained on MNIST with digits held out, and in the real-world setting of vision-based intruder detection for aerial vehicles.
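The probe-and-refine loop the abstract describes can be sketched in miniature. This is not the paper's implementation: it assumes a hypothetical 2-D embedding manifold, a toy black-box `mut_fails` whose failure region is one corner of that space, and a plain Gaussian-process surrogate with variance-maximizing acquisition standing in for the uncertainty-aware failure model that guides the agent's search.

```python
import numpy as np

rng = np.random.default_rng(0)


def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between two sets of embedding points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))


def gp_posterior(X, y, Xs, noise=1e-2):
    """Posterior mean and variance of failure probability at query points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks @ Kinv @ Ks.T)
    return mu, np.maximum(var, 0.0)


def mut_fails(z):
    """Hypothetical black-box MUT: fails only in one corner of the manifold."""
    return float(z[0] > 0.7 and z[1] < 0.3)


# Candidate test cases, already projected to the low-dimensional manifold.
candidates = rng.uniform(0, 1, size=(300, 2))

# Seed the failure landscape with a few random probes of the MUT.
idx = list(rng.choice(300, size=5, replace=False))
X = candidates[idx]
y = np.array([mut_fails(z) for z in X])

# Agent loop: query where the surrogate is most uncertain, record the outcome.
for _ in range(30):
    mu, var = gp_posterior(X, y, candidates)
    score = var.copy()
    score[idx] = -np.inf          # never re-query an already-labeled test case
    j = int(np.argmax(score))
    idx.append(j)
    X = np.vstack([X, candidates[j]])
    y = np.append(y, mut_fails(candidates[j]))

# Final surrogate: points with high posterior mean are the estimated ODD gaps.
mu, var = gp_posterior(X, y, candidates)
estimated_failure_region = candidates[mu > 0.5]
```

In AutoODD the candidate points come from LLM-generated test cases rather than uniform samples, but the structure is the same: the surrogate's uncertainty tells the agent where probing the MUT is most informative.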
Similar Papers
Incremental Validation of Automated Driving Functions using Generic Volumes in Micro-Operational Design Domains
Robotics
Tests self-driving cars for tricky situations.
Out-of-Distribution Detection for Safety Assurance of AI and Autonomous Systems
Artificial Intelligence
Helps self-driving cars spot unexpected dangers.
Beyond Benchmarks: Dynamic, Automatic And Systematic Red-Teaming Agents For Trustworthy Medical Language Models
Machine Learning (CS)
Finds hidden dangers in AI doctors before they hurt patients.