Hybrid LSTM and PPO Networks for Dynamic Portfolio Optimization
By: Jun Kevin, Pujianto Yugopuspito
Potential Business Impact:
Enables automated trading systems to adapt asset allocations to changing market conditions, potentially improving risk-adjusted returns.
This paper introduces a hybrid framework for portfolio optimization that fuses Long Short-Term Memory (LSTM) forecasting with a Proximal Policy Optimization (PPO) reinforcement learning strategy. The system leverages the predictive power of deep recurrent networks to capture temporal dependencies in asset returns, while the PPO agent adaptively refines portfolio allocations in a continuous action space, allowing it to anticipate trends while adjusting dynamically to market shifts. Using multi-asset datasets covering U.S. and Indonesian equities, U.S. Treasuries, and major cryptocurrencies from January 2018 to December 2024, the framework is benchmarked against equal-weighted, index-based, and single-model baselines (LSTM-only and PPO-only) on annualized return, annualized volatility, Sharpe ratio, and maximum drawdown, each adjusted for transaction costs. The results indicate that the hybrid architecture delivers higher returns and stronger resilience under non-stationary market regimes, suggesting its promise as a robust, AI-driven framework for dynamic portfolio optimization.
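As a rough illustration of how such a hybrid can be wired together, the sketch below (in PyTorch) feeds LSTM return forecasts into a PPO-style Gaussian actor whose output is squashed through a softmax into long-only portfolio weights. The module names, layer sizes, lookback window, and the long-only constraint are illustrative assumptions, not the authors' exact architecture, and the PPO clipped-objective update itself is omitted.

import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Predicts next-step returns for each asset from a window of past returns."""
    def __init__(self, n_assets: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_assets, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_assets)

    def forward(self, returns_window):              # (batch, lookback, n_assets)
        out, _ = self.lstm(returns_window)
        return self.head(out[:, -1])                # (batch, n_assets) forecasts

class HybridPolicy(nn.Module):
    """PPO-style actor: maps [market state, LSTM forecast] to portfolio weights.
    A Gaussian is sampled in logit space, then softmaxed so weights are
    non-negative and sum to 1 (a long-only simplification, assumed here)."""
    def __init__(self, n_assets: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_assets, hidden), nn.Tanh(),
            nn.Linear(hidden, n_assets),
        )
        self.log_std = nn.Parameter(torch.zeros(n_assets))

    def forward(self, state, forecast):
        mean = self.net(torch.cat([state, forecast], dim=-1))
        dist = torch.distributions.Normal(mean, self.log_std.exp())
        logits = dist.rsample()                     # reparameterized sample
        return torch.softmax(logits, dim=-1), dist.log_prob(logits).sum(-1)

# Toy usage: the most recent returns stand in for the market state.
n_assets, lookback = 4, 30
window = torch.randn(1, lookback, n_assets)
forecaster, policy = LSTMForecaster(n_assets), HybridPolicy(n_assets)
weights, log_prob = policy(window[:, -1], forecaster(window))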
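The reported metrics are standard and can be computed as in the following NumPy sketch; the proportional transaction-cost model, the 252-day annualization, and a zero risk-free rate in the Sharpe ratio are simplifying assumptions rather than the paper's exact procedure.

import numpy as np

def evaluate(daily_returns, weights, cost_rate=0.001, periods=252):
    """daily_returns, weights: (T, n_assets) arrays; weights chosen each day.
    cost_rate charges proportional transaction costs on daily turnover."""
    port = (weights * daily_returns).sum(axis=1)           # gross daily returns
    turnover = np.abs(np.diff(weights, axis=0)).sum(axis=1)
    port[1:] -= cost_rate * turnover                       # cost-adjusted returns
    equity = np.cumprod(1.0 + port)                        # equity curve
    ann_return = equity[-1] ** (periods / len(port)) - 1.0
    ann_vol = port.std(ddof=1) * np.sqrt(periods)
    sharpe = ann_return / ann_vol if ann_vol > 0 else np.nan  # zero risk-free rate
    drawdown = 1.0 - equity / np.maximum.accumulate(equity)
    return {"annualized_return": ann_return,
            "annualized_volatility": ann_vol,
            "sharpe": sharpe,
            "max_drawdown": drawdown.max()}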
Similar Papers
Adaptive and Regime-Aware RL for Portfolio Optimization
Portfolio Management
Helps computers make smarter money choices.
A Deep Reinforcement Learning Approach to Automated Stock Trading, using xLSTM Networks
Computational Engineering, Finance, and Science
Helps computers trade stocks better and make more money.
From Headlines to Holdings: Deep Learning for Smarter Portfolio Decisions
Portfolio Management
Helps money managers pick winning stocks better.