FinFlowRL: An Imitation-Reinforcement Learning Framework for Adaptive Stochastic Control in Finance
By: Yang Li, Zhi Chen, Steve Y. Yang, et al.
Potential Business Impact:
Helps automated trading systems stay profitable as market conditions change.
Traditional stochastic control methods in finance rely on simplifying assumptions that often fail in real-world markets. While these methods work well in specific, well-defined scenarios, they underperform when market conditions change. We introduce FinFlowRL, a novel framework for financial stochastic control that combines imitation learning with reinforcement learning. The framework first pretrains an adaptive meta policy by learning from multiple expert strategies, then fine-tunes it through reinforcement learning in the noise space to optimize the generation process. By employing action chunking, i.e., generating sequences of actions rather than single decisions, it addresses the non-Markovian nature of financial markets. FinFlowRL consistently outperforms individually optimized experts across diverse market conditions.
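To make the two-stage recipe in the abstract concrete, here is a minimal, hypothetical Python/PyTorch sketch: a meta policy is pretrained by imitating action chunks pooled from several expert strategies, and is then kept frozen while a small adapter in the noise space is tuned against a reward signal. All class names, network shapes, the REINFORCE-style update, and the toy reward function are illustrative assumptions, not the paper's actual flow-based algorithm.

# Hypothetical sketch of a FinFlowRL-style two-stage pipeline; names, shapes,
# and the RL update are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, CHUNK = 8, 1, 16   # action chunking: emit CHUNK steps per decision


class MetaPolicy(nn.Module):
    """Maps (state, noise) to a chunk of actions; stands in for the generative meta policy."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + CHUNK * ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, CHUNK * ACTION_DIM),
        )

    def forward(self, state, noise):
        return self.net(torch.cat([state, noise], dim=-1))


def pretrain_imitation(policy, expert_batches, epochs=10):
    """Stage 1: behaviour cloning against action chunks pooled from multiple experts."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(epochs):
        for states, expert_chunks in expert_batches:
            noise = torch.randn(states.shape[0], CHUNK * ACTION_DIM)
            loss = nn.functional.mse_loss(policy(states, noise), expert_chunks)
            opt.zero_grad()
            loss.backward()
            opt.step()


def finetune_in_noise_space(policy, adapter, reward_fn, states, steps=100):
    """Stage 2: keep the pretrained policy frozen and learn a shift of its noise input
    with a simple REINFORCE update (an assumed stand-in for noise-space optimization)."""
    opt = torch.optim.Adam(adapter.parameters(), lr=1e-4)
    for _ in range(steps):
        mean = adapter(states)                          # learned noise-space shift
        dist = torch.distributions.Normal(mean, 1.0)
        noise = dist.sample()
        with torch.no_grad():
            chunks = policy(states, noise)              # frozen chunk generator
        reward = reward_fn(states, chunks)              # e.g. PnL of the executed chunk
        advantage = reward - reward.mean()
        loss = -(dist.log_prob(noise).sum(-1) * advantage).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    policy = MetaPolicy()
    adapter = nn.Linear(STATE_DIM, CHUNK * ACTION_DIM)
    expert_data = [(torch.randn(32, STATE_DIM), torch.randn(32, CHUNK * ACTION_DIM))]
    pretrain_imitation(policy, expert_data, epochs=2)
    finetune_in_noise_space(policy, adapter,
                            reward_fn=lambda s, a: -a.pow(2).mean(dim=-1),
                            states=torch.randn(32, STATE_DIM), steps=5)

Freezing the pretrained generator and optimizing only in its noise input is one way to read "fine-tunes it through reinforcement learning in the noise space"; the chunked output reflects the action-chunking idea, since each forward pass commits to CHUNK future decisions at once.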
Similar Papers
Flow-Based Policy for Online Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn new skills faster.
FinRL Contests: Benchmarking Data-driven Financial Reinforcement Learning Agents
Computational Engineering, Finance, and Science
Helps computer trading agents learn to trade better and faster.