Detection of Deployment Operational Deviations for Safety and Security of AI-Enabled Human-Centric Cyber Physical Systems
By: Bernard Ngabonziza, Ayan Banerjee, Sandeep K. S. Gupta
Potential Business Impact:
Keeps smart medical devices safe from errors.
In recent years, human-centric cyber-physical systems have increasingly incorporated artificial intelligence to extract knowledge from sensor-collected data; examples include medical monitoring and control systems as well as autonomous cars. Such systems are intended to operate according to the protocols and guidelines for regular operation. However, in many scenarios, such as closed-loop blood glucose control for Type 1 diabetics, self-driving cars, and monitoring systems for stroke diagnosis, operation can expose these AI-enabled human-centric applications to cases in which their operational mode is uncertain, for instance as a result of human interaction with the system. Such uncertain conditions can violate the system's safety and security requirements. This paper discusses operational deviations that can lead these systems to operate under unknown conditions. We then develop a framework to evaluate different strategies for ensuring the safety and security of AI-enabled human-centric cyber-physical systems during deployment. Finally, as an example, we present a novel personalized image-based technique for detecting unannounced meals in closed-loop blood glucose control for Type 1 diabetics.
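To make the meal non-announcement problem concrete, the sketch below flags a possible unannounced meal when continuous glucose monitor (CGM) readings rise sharply with no meal announcement in the preceding window. This is a minimal illustrative assumption, not the paper's image-based technique: the function name, 5-minute sampling interval, and thresholds are all hypothetical.

```python
def unannounced_meal_alerts(glucose, announcements, rise_threshold=2.0, window=6):
    """Flag possible unannounced meals from CGM data (illustrative sketch).

    glucose: list of CGM readings (mg/dL) at fixed 5-minute intervals.
    announcements: set of sample indices at which a meal was announced.
    An index i is flagged when the average rise over the last `window`
    samples exceeds `rise_threshold` mg/dL per sample and no meal was
    announced anywhere in that window.
    """
    alerts = []
    for i in range(window, len(glucose)):
        # Average rate of change over the trailing window.
        rate = (glucose[i] - glucose[i - window]) / window
        # Was any meal announced within the trailing window (inclusive)?
        announced = any(j in announcements for j in range(i - window, i + 1))
        if rate > rise_threshold and not announced:
            alerts.append(i)
    return alerts
```

For example, a steady glucose trace produces no alerts, while a sustained post-meal rise with an empty announcement set is flagged; announcing the meal just before the rise suppresses the alerts. A deployed detector would of course need patient-specific tuning, which is one motivation for the personalized approach described above.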
Similar Papers
Personalized Model-Based Design of Human Centric AI enabled CPS for Long term usage
Artificial Intelligence
Makes AI systems safer for a long time.
Left shifting analysis of Human-Autonomous Team interactions to analyse risks of autonomy in high-stakes AI systems
Human-Computer Interaction
Finds AI mistakes before they cause big problems.
Human-Centered AI and Autonomy in Robotics: Insights from a Bibliometric Study
Robotics
Makes robots work safely with people.