Learning Verified Monitors for Hidden Markov Models
By: Luko van der Maas, Sebastian Junges
Potential Business Impact:
Watches a running system and raises an alarm when it is probably in an unsafe state, before harm occurs.
Runtime monitors assess whether a system is in an unsafe state based on a stream of observations. We study the problem where the system is subject to probabilistic uncertainty and described by a hidden Markov model. A stream of observations is then unsafe if the probability of being in an unsafe state is above a threshold. A correct monitor recognizes exactly the set of unsafe observation streams. The key contribution of this paper is the first correct-by-construction synthesis method for such monitors, represented as finite automata. The contribution combines four ingredients: First, we establish the coNP-hardness of checking whether an automaton is a correct monitor, i.e., a monitor without misclassifications. Second, we provide a reduction that reformulates the search for misclassifications into a standard probabilistic system synthesis problem. Third, we integrate the verification routine into an active automata learning loop to synthesize correct monitors. Fourth, we provide a prototypical implementation that shows the feasibility and limitations of the approach on a series of benchmarks.
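To make the monitoring semantics concrete, the following is a minimal sketch of the problem the abstract describes: forward-filtering the belief state of a hidden Markov model and flagging an observation stream as unsafe once the probability mass on unsafe states exceeds a threshold. This illustrates what a monitor must decide, not the paper's correct-by-construction synthesis method; the toy HMM (matrices `T` and `O`, initial distribution, unsafe state set) and the threshold are assumed placeholders.

```python
import numpy as np

def monitor(observations, T, O, init, unsafe_states, threshold):
    """Flag the observation stream as unsafe once the filtered probability
    of being in an unsafe state exceeds the threshold (sketch only)."""
    belief = np.array(init, dtype=float)
    for obs in observations:
        # Bayesian filtering step: predict with T, then condition on obs.
        belief = belief @ T              # predict next-state distribution
        belief = belief * O[:, obs]      # weight by observation likelihood
        belief = belief / belief.sum()   # normalise
        if belief[list(unsafe_states)].sum() > threshold:
            return True                  # stream classified as unsafe
    return False                         # stream classified as safe

# Toy 2-state HMM: state 1 is unsafe, observations are {0, 1}.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])
init = [1.0, 0.0]
print(monitor([1, 1, 1], T, O, init, unsafe_states={1}, threshold=0.6))
```

A monitor in the paper's sense is instead a finite automaton that classifies streams without tracking real-valued beliefs at runtime; the synthesis task is to learn such an automaton and verify that it never misclassifies a stream relative to the semantics sketched above.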
Similar Papers
Monitoring of Static Fairness
Machine Learning (CS)
Checks if computer decisions are fair to people.
Automata-less Monitoring via Trace-Checking (Extended Version)
Formal Languages and Automata Theory
Checks computer rules without making big machines.
Time for Timed Monitorability
Formal Languages and Automata Theory
Checks if computer programs will work correctly.