Local Reinforcement Learning with Action-Conditioned Root Mean Squared Q-Functions
By: Frank Wu, Mengye Ren
Potential Business Impact:
Teaches robots to learn without a backward pass.
The Forward-Forward (FF) algorithm is a recently proposed learning procedure for neural networks that employs two forward passes instead of the forward and backward passes used in backpropagation. However, FF remains largely confined to supervised settings, leaving a gap in domains where learning signals arise more naturally, such as reinforcement learning (RL). In this work, inspired by FF's goodness function built on layer activity statistics, we introduce Action-conditioned Root mean squared Q-Functions (ARQ), a novel value estimation method that combines a goodness function with action conditioning for local RL via temporal difference learning. Despite its simplicity and biological grounding, our approach outperforms state-of-the-art local backprop-free RL methods on the MinAtar and DeepMind Control Suite benchmarks, and also surpasses algorithms trained with backpropagation on most tasks. Code can be found at https://github.com/agentic-learning-ai-lab/arq.
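To make the abstract's idea concrete, here is a minimal PyTorch sketch of the pattern it describes: read a Q-value out of a layer's activity via a root-mean-square goodness, condition the layer on the action, and fit that readout locally with a TD(0) target. The layer sizes, ReLU nonlinearity, one-hot action conditioning, and Adam optimizer are illustrative assumptions on our part, not the authors' implementation; the linked repository contains the actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ARQLayer(nn.Module):
    """One locally trained layer (hypothetical sketch, not the paper's code).

    The layer maps (state, one-hot action) to a hidden activity vector and
    reads out Q(s, a) as the root mean square of that activity, in the
    spirit of FF's activity-statistics goodness functions.
    """

    def __init__(self, in_dim, hidden_dim, num_actions, lr=1e-3):
        super().__init__()
        self.fc = nn.Linear(in_dim + num_actions, hidden_dim)
        self.opt = torch.optim.Adam(self.fc.parameters(), lr=lr)
        self.num_actions = num_actions

    def forward(self, x, action):
        a = F.one_hot(action, self.num_actions).float()
        h = torch.relu(self.fc(torch.cat([x, a], dim=-1)))
        q = h.pow(2).mean(dim=-1).sqrt()  # RMS "goodness" as Q(s, a)
        return h, q

def td_update(layer, x, action, reward, q_next, gamma=0.99):
    """Local TD(0) step: regress the RMS goodness toward the bootstrapped
    target. The target is detached, so gradients stay inside this layer;
    no error signal is backpropagated across layers."""
    _, q = layer(x, action)
    target = reward + gamma * q_next.detach()
    loss = (q - target).pow(2).mean()
    layer.opt.zero_grad()
    loss.backward()
    layer.opt.step()
    return loss.item()

if __name__ == "__main__":
    # Toy usage with random transitions (shapes only; no real environment).
    layer = ARQLayer(in_dim=4, hidden_dim=64, num_actions=3)
    x = torch.randn(8, 4)                 # batch of states
    a = torch.randint(0, 3, (8,))         # actions taken
    r = torch.randn(8)                    # rewards
    with torch.no_grad():                 # bootstrap Q from the next state
        _, q_next = layer(torch.randn(8, 4), torch.randint(0, 3, (8,)))
    print(td_update(layer, x, a, r, q_next))
```

Under this reading, action selection is simply greedy over the per-action RMS goodness: evaluate the layer once per candidate action and take the argmax, which is what makes the action conditioning do the work a separate Q-head would otherwise do.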
Similar Papers
RL as Regressor: A Reinforcement Learning Approach for Function Approximation
Machine Learning (CS)
Trains predictions using custom game rewards
In Search of Goodness: Large Scale Benchmarking of Goodness Functions for the Forward-Forward Algorithm
Machine Learning (CS)
Makes AI learn better by changing how it judges "good."
Online reinforcement learning via sparse Gaussian mixture model Q-functions
Machine Learning (CS)
Teaches computers to learn faster with less data.