Robust Reinforcement Learning in Finance: Modeling Market Impact with Elliptic Uncertainty Sets
By: Shaocong Ma, Heng Huang
Potential Business Impact:
Helps automated trading agents stay profitable when their own trades move market prices.
In financial applications, reinforcement learning (RL) agents are commonly trained on historical data, where their actions do not influence prices. During deployment, however, these agents trade in live markets where their own transactions can shift asset prices, a phenomenon known as market impact. This mismatch between training and deployment environments can significantly degrade performance. Traditional robust RL approaches address such model misspecification by optimizing worst-case performance over an uncertainty set, but they typically rely on symmetric structures that fail to capture the directional nature of market impact. To address this issue, we develop a novel class of elliptic uncertainty sets. We establish both implicit and explicit closed-form solutions for the worst-case uncertainty under these sets, enabling efficient and tractable robust policy evaluation. Experiments on single-asset and multi-asset trading tasks demonstrate that our method achieves a superior Sharpe ratio and remains robust under increasing trade volumes, offering a more faithful and scalable approach to RL in financial markets.
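The closed-form worst case over an elliptic set is the part that lends itself to a short illustration. Below is a minimal sketch, not the paper's implementation, of robust policy evaluation in a tabular MDP where the nominal transition vector at each state may be perturbed inside an ellipsoid {delta : delta^T Sigma^{-1} delta <= rho^2}; for a linear objective p^T V, the minimum over such a set admits the standard closed form p_bar^T V - rho * sqrt(V^T Sigma V). The function names, the shape matrix Sigma, and the decision to ignore simplex constraints on the perturbed probabilities are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def worst_case_value(p_nominal, v_next, Sigma, rho):
    """Worst-case expected next-state value when the nominal transition
    vector p_nominal can be perturbed within the elliptic set
    {p : (p - p_nominal)^T Sigma^{-1} (p - p_nominal) <= rho^2}.

    For the linear objective p^T v, the minimum over this ellipsoid has
    the closed form  p_nominal^T v - rho * sqrt(v^T Sigma v).
    (Sketch only: the perturbed p is not projected back onto the simplex.)
    """
    nominal = p_nominal @ v_next
    penalty = rho * np.sqrt(v_next @ Sigma @ v_next)
    return nominal - penalty

def robust_policy_evaluation(P, R, Sigma, rho, gamma=0.99, iters=500, tol=1e-8):
    """Fixed-point iteration on the robust Bellman operator for a fixed policy.

    P: (S, S) nominal transition matrix under the policy.
    R: (S,) expected per-state reward.
    The operator is a gamma-contraction, so the iteration converges.
    """
    n_states = P.shape[0]
    V = np.zeros(n_states)
    for _ in range(iters):
        V_new = np.array([
            R[s] + gamma * worst_case_value(P[s], V, Sigma, rho)
            for s in range(n_states)
        ])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

# Toy example: 3-state chain with a diagonal shape matrix, so uncertainty
# is penalized more heavily in some directions than others.
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
R = np.array([1.0, 0.0, -1.0])
Sigma = np.diag([0.04, 0.01, 0.04])   # elliptic shape: direction-dependent radius
V_robust = robust_policy_evaluation(P, R, Sigma, rho=1.0)
```

The shape matrix Sigma is where directional structure would live: stretching the ellipsoid along directions aligned with a trade's price impact makes the worst case asymmetric in effect, which is the intuition behind preferring elliptic sets over symmetric balls. The paper's implicit and explicit solutions presumably also handle the probability-simplex constraints that this sketch omits.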
Similar Papers
Reinforcement Learning for Trade Execution with Market Impact
Trading & Market Microstructure
Helps traders make more money buying and selling.
From Individual Learning to Market Equilibrium: Correcting Structural and Parametric Biases in RL Simulations of Economic Models
General Economics
Teaches computers to make fair economic choices.
Reinforcement Learning in Financial Decision Making: A Systematic Review of Performance, Challenges, and Implementation Strategies
Computational Finance
Helps computers make smarter money choices.