From Individual Learning to Market Equilibrium: Correcting Structural and Parametric Biases in RL Simulations of Economic Models
By: Zeqiang Zhang, Ruxin Chen
Potential Business Impact:
Teaches computers to make fair economic choices.
The application of Reinforcement Learning (RL) to economic modeling reveals a fundamental conflict between the assumptions of equilibrium theory and the emergent behavior of learning agents. While canonical economic models assume atomistic agents act as "takers" of aggregate market conditions, a naive single-agent RL simulation incentivizes the agent to become a "manipulator" of its environment. This paper first demonstrates this discrepancy within a search-and-matching model with concave production, showing that a standard RL agent learns a non-equilibrium, monopsonistic policy. Additionally, we identify a parametric bias arising from the mismatch between economic discounting and RL's treatment of intertemporal costs. To address both issues, we propose a calibrated Mean-Field Reinforcement Learning framework that embeds a representative agent in a fixed macroeconomic field and adjusts the cost function to reflect economic opportunity costs. Our iterative algorithm converges to a self-consistent fixed point where the agent's policy aligns with the competitive equilibrium. This approach provides a tractable and theoretically sound methodology for modeling learning agents in economic systems within the broader domain of computational social science.
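The mean-field iteration described in the abstract, holding the aggregate field fixed while the representative agent best-responds, then updating the field until the two coincide, can be sketched as follows. This is a minimal toy illustration, not the paper's model: the payoff `a*(1 - 0.5*m) - a**2/2`, the wage index, and the damping scheme are all assumptions chosen so the best response has a closed form.

```python
# Hypothetical sketch of a mean-field fixed-point loop:
# 1) hold the aggregate field m fixed,
# 2) compute the representative agent's best response,
# 3) update the field, and repeat to convergence.
# The concave payoff a*(1 - 0.5*m) - a**2/2 is an illustrative
# stand-in for the paper's search-and-matching model.

def best_response(m: float) -> float:
    """Agent's optimal action against a fixed aggregate field m.

    Maximizes a*(1 - 0.5*m) - a**2/2 in closed form (first-order
    condition: a = 1 - 0.5*m).
    """
    return 1.0 - 0.5 * m

def solve_mean_field(m0: float = 0.5, damping: float = 0.5,
                     tol: float = 1e-10, max_iter: int = 10_000) -> float:
    """Damped fixed-point iteration: m_{k+1} = (1-d)*m_k + d*BR(m_k)."""
    m = m0
    for _ in range(max_iter):
        m_next = (1.0 - damping) * m + damping * best_response(m)
        if abs(m_next - m) < tol:
            return m_next
        m = m_next
    raise RuntimeError("mean-field iteration did not converge")

m_star = solve_mean_field()
# At the fixed point the field equals the agent's own best response,
# so the agent behaves as a taker of m, not a manipulator of it.
print(round(m_star, 6))  # 0.666667 (the fixed point of m = 1 - 0.5*m)
```

Because the agent optimizes against a frozen field at each step, it cannot profit from moving the aggregate, which is exactly the discipline the abstract attributes to embedding the agent in a fixed macroeconomic field.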
Similar Papers
Learning Closed-Loop Parametric Nash Equilibria of Multi-Agent Collaborative Field Coverage
Multiagent Systems
Teaches robots to cover areas much faster.
Reinforcement Learning and Consumption-Savings Behavior
General Economics
Explains why people spend less after losing jobs.
Reinforcement Learning in Queue-Reactive Models: Application to Optimal Execution
Trading & Market Microstructure
Teaches computers to trade stocks smartly.