Policy Gradient Methods for Information-Theoretic Opacity in Markov Decision Processes
By: Chongyang Shi, Sumukha Udupa, Michael R. Dorothy, and more
Potential Business Impact:
Keeps secrets safe from prying eyes.
Opacity, or non-interference, is a property ensuring that an external observer cannot infer confidential information (the "secret") from system observations. We introduce an information-theoretic measure of opacity, which quantifies information leakage using the conditional entropy of the secret given the observer's partial observations in a system modeled as a Markov decision process (MDP). Our objective is to find a control policy that maximizes opacity while satisfying task performance constraints, assuming that an informed observer knows the control policy and the system dynamics. Specifically, we consider a class of opacity called state-based opacity, where the secret is a propositional formula about the past or current state of the system, and a special case of state-based opacity called language-based opacity, where the secret is defined by a linear temporal logic (LTL) formula or a regular language recognized by a finite-state automaton. First, we prove that finite-memory policies can outperform Markov policies in optimizing information-theoretic opacity. Second, we develop a primal-dual gradient-based algorithm to compute a maximally opaque Markov policy and prove its convergence. Since opacity cannot be expressed as a cumulative cost, we develop a novel method to compute the gradient of conditional entropy with respect to policy parameters using observable operators in hidden Markov models. Experimental results validate the effectiveness and optimality of the proposed methods.
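The abstract points to two computational ingredients: evaluating the observer's posterior over the secret via observable operators of the induced hidden Markov model, and using the conditional entropy of the secret given observations as the opacity objective. The Python sketch below illustrates only the evaluation side on a toy model under a fixed policy; the transition matrix `P`, observation matrix `B`, horizon, and the "final state is secret" predicate are illustrative assumptions rather than the paper's construction, and the policy-gradient and primal-dual machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy model; sizes, matrices, and the secret are placeholders.
n_states, n_obs, horizon = 4, 3, 6
P = rng.dirichlet(np.ones(n_states), size=n_states)   # P[s, s']: transitions under a fixed policy
B = rng.dirichlet(np.ones(n_obs), size=n_states)      # B[s, o]: observation probabilities
mu0 = np.full(n_states, 1.0 / n_states)               # initial state distribution
secret = np.array([False, False, False, True])        # secret Z: "final state is state 3"

# Observable operators: A_o[s', s] = P[s, s'] * B[s', o], so the forward vector
# after seeing y_0..y_t is  alpha_t = A_{y_t} ... A_{y_1} diag(B[:, y_0]) mu0
# and P(y_0..y_t) = 1^T alpha_t.
A = np.array([(P * B[:, o][None, :]).T for o in range(n_obs)])

def posterior_secret(obs_seq):
    """Return P(Z = 1 | y) from the observable-operator forward recursion."""
    alpha = mu0 * B[:, obs_seq[0]]
    for o in obs_seq[1:]:
        alpha = A[o] @ alpha
    return alpha[secret].sum() / alpha.sum()

def sample_observations():
    """Sample one observation sequence from the toy model."""
    s = rng.choice(n_states, p=mu0)
    obs = [rng.choice(n_obs, p=B[s])]
    for _ in range(horizon - 1):
        s = rng.choice(n_states, p=P[s])
        obs.append(rng.choice(n_obs, p=B[s]))
    return obs

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Monte Carlo estimate of the opacity objective H(Z | Y): average the posterior
# entropy of the secret over observation sequences drawn from the model.
posteriors = [posterior_secret(sample_observations()) for _ in range(5000)]
print("estimated H(Z | Y) in nats:", np.mean([binary_entropy(p) for p in posteriors]))
```

In the paper's setting this conditional entropy is differentiated with respect to the policy parameters and maximized under task constraints; here it is only estimated by sampling under one fixed policy.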
Similar Papers
Opacity problems in multi-energy timed automata
Cryptography and Security
Protects secret information from spies watching time and energy.
To Distill or Decide? Understanding the Algorithmic Trade-off in Partially Observable Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn better from hidden information.
New Insights into the Decidability of Opacity in Timed Automata
Systems and Control
Makes computer security checks easier to decide.