FinFlowRL: An Imitation-Reinforcement Learning Framework for Adaptive Stochastic Control in Finance
By: Yang Li, Zhi Chen
Potential Business Impact:
Helps money managers make smarter choices in changing markets.
Traditional stochastic control methods in finance struggle in real-world markets because they rely on simplifying assumptions and stylized frameworks. Such methods typically perform well in specific, well-defined environments but yield suboptimal results in changing, non-stationary ones. We introduce FinFlowRL, a novel framework for financial optimal stochastic control. The framework pretrains an adaptive meta-policy by imitating multiple expert strategies, then fine-tunes it through reinforcement learning in the noise space to optimize the generative process. By employing action chunking, i.e., generating action sequences rather than single decisions, it addresses the non-Markovian nature of markets. FinFlowRL consistently outperforms the individually optimized experts across diverse market conditions.
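The two-stage pipeline described in the abstract can be sketched in miniature. The following is a hedged toy illustration, not the authors' implementation: a linear map stands in for the generative flow policy, squared-error regression stands in for imitation pretraining, and random-search hill climbing in the latent noise vector stands in for the reinforcement-learning fine-tuning stage. The chunk length, noise dimension, expert strategies, and reward function are all hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

CHUNK = 4      # actions emitted per decision (action chunking); size is assumed
NOISE_DIM = 8  # latent noise dimension; size is assumed

def meta_policy(noise, W):
    """Map a latent noise vector to a chunk of actions.

    A tanh-squashed linear map is a stand-in for the paper's generative
    process; W plays the role of the learned meta-policy parameters."""
    return np.tanh(W @ noise)  # shape (CHUNK,)

# --- Stage 1: imitation pretraining from multiple expert strategies ---
# Two toy "experts", each producing a fixed action chunk.
experts = [lambda: np.full(CHUNK, 0.5), lambda: np.linspace(-1.0, 1.0, CHUNK)]
W = rng.normal(size=(CHUNK, NOISE_DIM))
for _ in range(200):
    z = rng.normal(size=NOISE_DIM)
    target = experts[rng.integers(len(experts))]()       # sample an expert chunk
    pred = meta_policy(z, W)
    # Gradient step on squared error (stand-in for the actual imitation loss).
    grad = np.outer((pred - target) * (1.0 - pred**2), z)
    W -= 0.05 * grad

# --- Stage 2: fine-tuning in the noise space (policy weights frozen) ---
def reward(chunk):
    # Toy market objective: prefer chunks near 0.5 (assumption for the sketch).
    return -np.sum((chunk - 0.5) ** 2)

z = rng.normal(size=NOISE_DIM)
for _ in range(100):
    cand = z + 0.1 * rng.normal(size=NOISE_DIM)  # perturb the noise, not W
    if reward(meta_policy(cand, W)) > reward(meta_policy(z, W)):
        z = cand  # hill-climbing stand-in for the RL update
```

The key structural point the sketch preserves is that fine-tuning searches over the input noise while the pretrained generator stays fixed, and that each decision yields a whole action chunk rather than a single action.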
Similar Papers
Flow-Based Policy for Online Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn new skills faster.
FinRL Contests: Benchmarking Data-driven Financial Reinforcement Learning Agents
Computational Engineering, Finance, and Science
Helps computers trade money better and faster.