Toward Maturity-Based Certification of Embodied AI: Quantifying Trustworthiness Through Measurement Mechanisms
By: Michael C. Darling, Alan H. Hesu, Michael A. Mardikes, and more
Potential Business Impact:
Helps robots prove they are safe and reliable.
We propose a maturity-based framework for certifying embodied AI systems through explicit measurement mechanisms. We argue that certifiable embodied AI requires structured assessment frameworks, quantitative scoring mechanisms, and methods for navigating multi-objective trade-offs inherent in trustworthiness evaluation. We demonstrate this approach using uncertainty quantification as an exemplar measurement mechanism and illustrate feasibility through an Uncrewed Aircraft System (UAS) detection case study.
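To make the idea of a quantitative scoring mechanism concrete, here is a minimal, hypothetical sketch (not taken from the paper) of how uncertainty quantification could feed a trustworthiness score: a detector's per-detection class probabilities are reduced to predictive entropy, and the batch-average entropy is mapped to a [0, 1] score. The function names, the two-class UAS-vs-bird setup, and the scoring formula are all illustrative assumptions.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of one predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def uq_trust_score(detections, max_entropy):
    """Hypothetical UQ-based score: map mean predictive entropy over a
    batch of detections to [0, 1], where 1.0 means fully confident
    predictions and 0.0 means maximally uncertain ones."""
    if not detections:
        return 0.0
    mean_h = sum(predictive_entropy(p) for p in detections) / len(detections)
    return max(0.0, 1.0 - mean_h / max_entropy)

# Illustrative two-class (e.g., UAS vs. bird) detector outputs.
dets = [[0.95, 0.05], [0.80, 0.20], [0.60, 0.40]]
score = uq_trust_score(dets, max_entropy=math.log(2))  # 0 <= score <= 1
```

In a maturity-based framework, a score like this would be one input among several, with the multi-objective trade-offs (e.g., confidence vs. detection recall) handled at the assessment layer rather than inside any single metric.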
Similar Papers
The Trust Calibration Maturity Model for Characterizing and Communicating Trustworthiness of AI Systems
Human-Computer Interaction
Helps people know if AI can be trusted.
Perceptual Quality Assessment for Embodied AI
CV and Pattern Recognition
Helps robots understand messy real-world pictures.
Trustworthy Orchestration Artificial Intelligence by the Ten Criteria with Control-Plane Governance
Artificial Intelligence
Makes AI systems trustworthy and understandable.