Quantifying Uncertainty in Machine Learning-Based Pervasive Systems: Application to Human Activity Recognition
By: Vladimir Balditsyn, Philippe Lalanda, German Vega, and more
The recent convergence of pervasive computing and machine learning has given rise to numerous services, impacting almost all areas of economic and social activity. However, the use of AI techniques precludes certain standard software development practices, which rely on rigorous testing to eliminate bugs and to ensure adherence to well-defined specifications. ML models are trained on large numbers of high-dimensional examples rather than being manually coded. Consequently, the boundaries of their operating range are uncertain, and they cannot guarantee error-free behavior. In this paper, we propose to quantify uncertainty in ML-based systems. To this end, we adapt and jointly apply a set of selected techniques that evaluate the relevance of model predictions at runtime. We apply and evaluate these proposals in the highly heterogeneous and evolving domain of Human Activity Recognition (HAR). The results demonstrate the relevance of the approach, and we discuss in detail the assistance it provides to domain experts.
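The abstract does not detail which uncertainty techniques the authors adapt, but a common runtime proxy for predictive uncertainty in classifiers such as HAR models is the Shannon entropy of the softmax output: low entropy when the model commits to one activity class, high entropy when predictions are ambiguous. The sketch below is a generic illustration of this idea, not the authors' method; the `predictive_entropy` helper and the example probability vectors are assumptions for demonstration.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a predictive distribution (nats).

    Higher entropy means the model spreads probability mass over
    several classes, i.e. the prediction is more uncertain.
    """
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return float(-np.sum(p * np.log(p)))

# A confident prediction (e.g. "walking" with 97% probability)
confident = np.array([0.97, 0.01, 0.02])
# An ambiguous prediction spread across three activity classes
ambiguous = np.array([0.40, 0.35, 0.25])

print(predictive_entropy(confident) < predictive_entropy(ambiguous))  # True
```

At runtime, such a score can be thresholded to flag predictions whose relevance is doubtful, e.g. when a HAR model encounters sensor data outside its training distribution.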
Similar Papers
- Beyond Quantification: Navigating Uncertainty in Professional AI Systems (Human-Computer Interaction). Helps AI show when it's unsure about answers.
- Uncertainty-Driven Reliability: Selective Prediction and Trustworthy Deployment in Modern Machine Learning (Machine Learning, CS). Helps computers know when they are wrong.
- Human-AI Collaborative Uncertainty Quantification (Artificial Intelligence). AI helps people make better guesses.