Stress-Aware Learning under KL Drift via Trust-Decayed Mirror Descent
By: Gabriel Nixon Raj
Potential Business Impact:
Helps computers learn when rules change suddenly.
We study sequential decision-making under distribution drift. We propose entropy-regularized trust-decay, which injects stress-aware exponential tilting into both belief updates and mirror-descent decisions. On the simplex, a Fenchel-dual equivalence shows that belief tilt and decision tilt coincide. We formalize robustness via fragility (worst-case excess risk in a KL ball), belief bandwidth (radius sustaining a target excess), and a decision-space Fragility Index (drift tolerated at $O(\sqrt{T})$ regret). We prove high-probability sensitivity bounds and establish dynamic-regret guarantees of $\tilde{O}(\sqrt{T})$ under KL-drift path length $S_T = \sum_{t\ge2}\sqrt{{\rm KL}(D_t \,\|\, D_{t-1})/2}$. In particular, trust-decay achieves $O(1)$ per-switch regret, while stress-free updates incur $\Omega(1)$ tails. A parameter-free hedge adapts the tilt to unknown drift, whereas persistent over-tilting yields an $\Omega(\lambda^2 T)$ stationary penalty. We further obtain calibrated-stress bounds and extensions to second-order updates, bandit feedback, outliers, stress variation, distributed optimization, and plug-in KL-drift estimation. The framework unifies dynamic-regret analysis, distributionally robust objectives, and KL-regularized control within a single stress-adaptive update.
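The abstract does not spell out the closed form of the update, but on the simplex an entropy-regularized mirror-descent step reduces to exponential weights, and "trust decay" can be read as a geometric shrinkage of the log-belief toward the uniform prior. The sketch below is a minimal illustration under that assumption; the function name, the parameters `eta` (learning rate) and `lam` (tilt/decay strength), and the toy switch time are hypothetical, not taken from the paper.

```python
import numpy as np

def trust_decayed_mirror_step(w, loss, eta=0.1, lam=0.05):
    """One entropy-regularized mirror-descent step on the simplex with a
    trust-decay tilt (assumed form: geometric shrinkage of log-beliefs
    toward the uniform distribution, i.e. partial forgetting of the past)."""
    logits = np.log(w) - eta * loss          # standard exponential-weights step
    logits = (1.0 - lam) * logits            # stress-aware tilt: discount accumulated belief
    w_new = np.exp(logits - logits.max())    # stabilize before normalizing
    return w_new / w_new.sum()

# Toy usage: track an abruptly switching best action among three arms.
w = np.ones(3) / 3
for t in range(100):
    best = 0 if t < 50 else 2                # distribution switch at t = 50 (illustrative)
    loss = np.ones(3)
    loss[best] = 0.0
    w = trust_decayed_mirror_step(w, loss)
```

With `lam = 0`, the update is plain exponential weights and recovers the stress-free baseline; a positive `lam` bounds how concentrated the belief can become, which is the mechanism behind the $O(1)$ per-switch regret claimed in the abstract, at the cost of the stationary penalty when the tilt is kept on unnecessarily.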
Similar Papers
Stress-Aware Resilient Neural Training
Machine Learning (CS)
Helps computers learn better when things are tough.
Learning bounds for doubly-robust covariate shift adaptation
Statistics Theory
Makes computer learning work better with new data.
STAR: Stability-Inducing Weight Perturbation for Continual Learning
Machine Learning (CS)
Keeps computers remembering old lessons while learning new ones.