Reinforcement Learning with Function Approximation for Non-Markov Processes

Published: January 1, 2026 | arXiv ID: 2601.00151v1

By: Ali Devran Kara

Potential Business Impact:

Helps computers learn good decisions when they can only observe imperfect or partial information about a system.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We study reinforcement learning methods with linear function approximation under non-Markov state and cost processes. We first consider policy evaluation and show that the algorithm converges under suitable ergodicity conditions on the underlying non-Markov processes. Furthermore, we show that the limit corresponds to the fixed point of a joint operator composed of an orthogonal projection and the Bellman operator of an auxiliary Markov decision process. For Q-learning with linear function approximation, as in the Markov setting, convergence is not guaranteed in general. We show, however, that for the special case where the basis functions are chosen based on quantization maps, convergence can be established under similar ergodicity conditions. Finally, we apply our results to partially observed Markov decision processes, where finite-memory variables are used as state representations, and we derive explicit error bounds for the limits of the resulting learning algorithms.
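To make the setting concrete, the sketch below shows TD(0) policy evaluation with linear function approximation, where the features are indicators of quantization cells (state aggregation), the special basis choice the abstract singles out. This is a minimal illustration, not the paper's implementation: the helper names (`quantize`, `one_hot`, `env_step`), the 1-D observation space, and the step sizes are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's code): TD(0) policy evaluation
# with linear function approximation V(x) ~ theta^T phi(x), where phi is an indicator
# basis induced by a quantization map over the observation space.

def quantize(x, bins):
    """Map a scalar observation to a quantization-cell index (1-D case assumed)."""
    return int(np.clip(np.digitize(x, bins), 0, len(bins)))

def one_hot(idx, n):
    """Indicator basis function for the quantization cell `idx`."""
    phi = np.zeros(n)
    phi[idx] = 1.0
    return phi

def td0_policy_evaluation(env_step, bins, alpha=0.05, gamma=0.95, steps=10_000, seed=0):
    """TD(0) with a quantization-based linear basis.

    `env_step(rng)` is a hypothetical callback that returns (observation, cost) pairs
    generated by the (possibly non-Markov) process under a fixed policy. Under the
    kind of ergodicity conditions studied in the paper, the iterates converge to the
    fixed point of the projected Bellman operator of an auxiliary Markov model.
    """
    rng = np.random.default_rng(seed)
    n = len(bins) + 1          # number of quantization cells / basis functions
    theta = np.zeros(n)        # linear weights
    x, c = env_step(rng)
    for _ in range(steps):
        x_next, c_next = env_step(rng)
        phi = one_hot(quantize(x, bins), n)
        phi_next = one_hot(quantize(x_next, bins), n)
        td_error = c + gamma * (theta @ phi_next) - (theta @ phi)
        theta += alpha * td_error * phi
        x, c = x_next, c_next
    return theta
```

With indicator features of this kind, the orthogonal projection reduces to averaging within each quantization cell, which is the structural property that lets the paper extend convergence guarantees from policy evaluation to Q-learning in this special case.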

Country of Origin
🇺🇸 United States

Page Count
47 pages

Category
Computer Science:
Machine Learning (CS)