Predictive Compensation in Finite-Horizon LQ Games under Gauss-Markov Deviations
By: Navid Mojahed, Mahdis Rabbani, Shima Nazari
Potential Business Impact:
Helps an automated controller anticipate and correct for another agent's persistent execution errors.
This paper develops a predictive compensation framework for finite-horizon, discrete-time linear quadratic dynamic games subject to Gauss-Markov execution deviations from feedback Nash strategies. One player's control is corrupted by temporally correlated stochastic perturbations modeled as a first-order autoregressive (AR(1)) process, while the opposing player has causal access to past deviations and employs a predictive feedforward strategy that anticipates their future effect. We derive closed-form recursions for mean and covariance propagation under the resulting perturbed closed loop, establish boundedness and sensitivity properties of the equilibrium trajectory, and characterize the reduction in expected cost achieved by optimal predictive compensation. Numerical experiments corroborate the theoretical results and demonstrate performance gains relative to nominal Nash feedback across a range of disturbance persistence levels.
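To make the setup concrete, the following is a minimal NumPy sketch under simplifying assumptions: a two-state system with scalar controls, feedback Nash gains from the standard coupled Riccati recursion, an AR(1) execution deviation added to player 1's input, and a one-step-prediction feedforward term for player 2 with a simple least-squares cancellation gain F used in place of the paper's optimal predictive compensator. All numerical values (A, B1, B2, Qi, Ri, rho, sigma, horizon N) are illustrative assumptions, not quantities from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Dynamics x_{k+1} = A x_k + B1 u_k + B2 v_k over a finite horizon N
# (illustrative two-state system with scalar controls for each player).
N = 30
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B1 = np.array([[0.0], [0.10]])   # player 1: the deviating player
B2 = np.array([[0.0], [0.12]])   # player 2: the compensating player

# Quadratic stage and terminal cost weights for each player (illustrative).
Q1, R1 = np.eye(2), np.array([[1.0]])
Q2, R2 = np.eye(2), np.array([[1.0]])

# Backward coupled Riccati recursion for the feedback Nash gains K1_k, K2_k.
P1, P2 = Q1.copy(), Q2.copy()
K1s, K2s = [None] * N, [None] * N
for k in reversed(range(N)):
    M = np.block([[R1 + B1.T @ P1 @ B1, B1.T @ P1 @ B2],
                  [B2.T @ P2 @ B1,      R2 + B2.T @ P2 @ B2]])
    rhs = np.vstack([B1.T @ P1 @ A, B2.T @ P2 @ A])
    K = np.linalg.solve(M, rhs)          # stacked stage gains [K1; K2]
    K1, K2 = K[:1, :], K[1:, :]
    Acl = A - B1 @ K1 - B2 @ K2          # nominal closed loop
    P1 = Q1 + K1.T @ R1 @ K1 + Acl.T @ P1 @ Acl
    P2 = Q2 + K2.T @ R2 @ K2 + Acl.T @ P2 @ Acl
    K1s[k], K2s[k] = K1, K2

# AR(1) execution deviation on player 1's control: w_{k+1} = rho*w_k + sigma*eps_k.
rho, sigma = 0.8, 0.05
# Simple feedforward gain: least-squares cancellation of the predicted deviation's
# effect on the state (a stand-in for the paper's optimal predictive compensator).
F = -np.linalg.pinv(B2) @ B1

def expected_cost_player2(compensate, trials=2000):
    costs = np.zeros(trials)
    for t in range(trials):
        x = np.array([[1.0], [0.0]])
        w_prev, w = 0.0, 0.0             # past and current deviation (w_0 = 0)
        J2 = 0.0
        for k in range(N):
            u = -K1s[k] @ x + w          # player 1's corrupted Nash control
            w_hat = rho * w_prev         # one-step prediction from the past deviation
            v = -K2s[k] @ x + (F * w_hat if compensate else 0.0)
            J2 += (x.T @ Q2 @ x + v.T @ R2 @ v).item()
            x = A @ x + B1 @ u + B2 @ v
            w_prev = w
            w = rho * w + sigma * rng.standard_normal()
        J2 += (x.T @ Q2 @ x).item()      # terminal cost
        costs[t] = J2
    return costs.mean()

print("E[J2], nominal Nash feedback:       ", expected_cost_player2(compensate=False))
print("E[J2], with predictive feedforward: ", expected_cost_player2(compensate=True))

The script prints player 2's Monte Carlo average cost with and without the feedforward term; varying rho gives a rough feel for how the benefit of compensation changes with disturbance persistence.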
Similar Papers
Identifying Time-varying Costs in Finite-horizon Linear Quadratic Gaussian Games
Systems and Control
Recovers hidden, time-varying objectives from observed decisions.
Optimal Modified Feedback Strategies in LQ Games under Control Imperfections
CS and Game Theory
Fixes game plans when things go wrong.