Policy Gradient Methods for Information-Theoretic Opacity in Markov Decision Processes

Published: November 4, 2025 | arXiv ID: 2511.02704v1

By: Chongyang Shi, Sumukha Udupa, Michael R. Dorothy, and more

Potential Business Impact:

Keeps secrets safe from prying eyes.

Business Areas:
Darknet Internet Services

Opacity, or non-interference, is a property ensuring that an external observer cannot infer confidential information (the "secret") from system observations. We introduce an information-theoretic measure of opacity, which quantifies information leakage using the conditional entropy of the secret given the observer's partial observations in a system modeled as a Markov decision process (MDP). Our objective is to find a control policy that maximizes opacity while satisfying task performance constraints, assuming that an informed observer knows both the control policy and the system dynamics. Specifically, we consider a class of opacity called state-based opacity, where the secret is a propositional formula about the past or current state of the system, and a special case called language-based opacity, where the secret is defined by a linear temporal logic (LTL) formula or a regular language recognized by a finite-state automaton. First, we prove that finite-memory policies can outperform Markov policies in optimizing information-theoretic opacity. Second, we develop a primal-dual gradient-based algorithm to compute a maximally opaque Markov policy and prove its convergence. Since opacity cannot be expressed as a cumulative cost, we develop a novel method to compute the gradient of conditional entropy with respect to policy parameters using observable operators in hidden Markov models. The experimental results validate the effectiveness and optimality of our proposed methods.
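To make the opacity measure concrete, the following is a minimal sketch (not the paper's algorithm) of how conditional entropy H(Z | Y) of a secret given an observation sequence can be evaluated with observable operators, on a toy HMM obtained by fixing a policy in a small MDP. All numbers, the state space, the observation model, and the choice of secret ("the system reached state 2 by the horizon") are illustrative assumptions.

```python
import itertools
import numpy as np

# Hypothetical 3-state MDP under a fixed policy, reduced to an HMM.
# All matrices below are made-up examples, not taken from the paper.
T = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])        # T[i, j] = P(next state j | state i)
B = np.array([[0.8, 0.2],
              [0.5, 0.5],
              [0.2, 0.8]])             # B[i, o] = P(observation o | state i)
mu = np.array([1.0, 0.0, 0.0])         # initial state distribution
secret = np.array([False, False, True])  # secret Z: "state 2 reached at the horizon"

# Observable operators: A_o weights each state by the probability of
# emitting o from it, then applies one step of the state transition.
A = [T.T @ np.diag(B[:, o]) for o in range(B.shape[1])]

horizon = 4
H = 0.0  # conditional entropy H(Z | Y_{1:horizon}) in bits
for y in itertools.product(range(B.shape[1]), repeat=horizon):
    alpha = mu
    for o in y:               # alpha becomes the joint P(state, y_{1:t})
        alpha = A[o] @ alpha
    p_y = alpha.sum()         # P(y)
    if p_y == 0:
        continue
    # Joint probabilities P(y, z) for z in {secret false, secret true}
    p_yz = np.array([alpha[~secret].sum(), alpha[secret].sum()])
    for p in p_yz:
        if p > 0:
            H -= p * np.log2(p / p_y)

print(f"H(Z | Y) = {H:.4f} bits")
```

For a binary secret, H lies in [0, 1] bits: 0 means the observer can deduce the secret from the observations, while values near 1 mean the observations reveal almost nothing. The paper's contribution is to differentiate this quantity with respect to policy parameters (which here would reshape T) and ascend it under task constraints; this sketch only shows the forward evaluation that such a gradient method builds on.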

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Electrical Engineering and Systems Science:
Systems and Control