A New DAPO Algorithm for Stock Trading
By: Ruijian Zha, Bojun Liu
Potential Business Impact:
Trains trading software to make better decisions while using less computing time and memory.
Recent advances in reinforcement learning, such as Dynamic Sampling Policy Optimization (DAPO), show strong performance when paired with large language models (LLMs). Motivated by this success, we ask whether similar gains can be realized in financial trading. We design a trading agent that combines an improved Group Relative Policy Optimization (GRPO) algorithm, augmented with ideas from DAPO, with LLM-based risk and sentiment signals extracted from financial news. On the NASDAQ-100 index (FNSPID dataset), our agent attains a cumulative return of 230.49 percent and an information ratio of 0.37, outperforming the CPPO-DeepSeek baseline. It also cuts training time from about 8 hours to 2.5 hours over 100 epochs while markedly reducing RAM usage. The proposed RL-LLM framework offers a scalable path toward data-efficient trading agents. Code: https://github.com/Ruijian-Zha/FinRL-DAPO-SR/
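To make the abstract's description more concrete, here is a minimal, illustrative sketch of the three ingredients it names: a reward shaped by LLM-derived sentiment and risk scores, GRPO's group-relative advantage (no learned value function), and DAPO-style dynamic sampling that drops groups with no learning signal. This is not taken from the linked repository; the function names, the alpha/beta scaling parameters, and the toy numbers are assumptions for illustration only.

```python
import numpy as np

def sentiment_adjusted_reward(raw_return, sentiment, risk, alpha=0.1, beta=0.1):
    """Hypothetical reward shaping: scale a trade's raw return by
    LLM-derived sentiment and risk scores (both assumed in [0, 1]).
    alpha and beta are illustrative weights, not values from the paper."""
    return raw_return * (1.0 + alpha * sentiment) * (1.0 - beta * risk)

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each sampled reward against the
    mean and standard deviation of its own group."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def dynamic_sampling_filter(groups, eps=1e-8):
    """DAPO-style dynamic sampling: discard groups whose rewards are
    (near-)identical, since they produce zero advantage and no gradient."""
    return [g for g in groups if np.asarray(g, dtype=np.float64).std() > eps]

def information_ratio(portfolio_returns, benchmark_returns, eps=1e-8):
    """Information ratio on per-period returns: mean active return over
    the benchmark divided by the standard deviation of the active return
    (annualization omitted for simplicity)."""
    active = np.asarray(portfolio_returns) - np.asarray(benchmark_returns)
    return active.mean() / (active.std() + eps)

if __name__ == "__main__":
    # Toy example: three groups of per-trajectory rewards sampled for one state.
    groups = [
        [0.02, -0.01, 0.03, 0.00],   # informative group
        [0.01, 0.01, 0.01, 0.01],    # degenerate group -> filtered out
        [-0.02, 0.04, 0.01, -0.03],  # informative group
    ]
    for g in dynamic_sampling_filter(groups):
        print(group_relative_advantages(g))

    # Toy information-ratio calculation on made-up daily returns.
    print(information_ratio([0.010, -0.002, 0.005], [0.006, -0.001, 0.002]))
```

In this sketch, filtering out the degenerate group before computing advantages is what saves wasted gradient steps, which is one plausible reading of how the DAPO-style changes reduce training time relative to plain GRPO.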
Similar Papers
Comparative Analysis and Parametric Tuning of PPO, GRPO, and DAPO for LLM Reasoning Enhancement
Artificial Intelligence
Teaches computers to think better and solve problems.
DAPO: An Open-Source LLM Reinforcement Learning System at Scale
Machine Learning (CS)
Teaches AI to solve hard math problems better.
DCPO: Dynamic Clipping Policy Optimization
Computation and Language
Makes AI better at learning from its own answers.