MemRL: Self-Evolving Agents via Runtime Reinforcement Learning on Episodic Memory

Published: January 6, 2026 | arXiv ID: 2601.03192v1

By: Shengtao Zhang, Jiaqian Wang, Ruiwen Zhou, and more

Potential Business Impact:

Enables AI agents to improve at new tasks during deployment by learning from their own past experiences, without costly model retraining.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The hallmark of human intelligence is the ability to master new skills through Constructive Episodic Simulation: retrieving past experiences to synthesize solutions for novel tasks. While Large Language Models possess strong reasoning capabilities, they struggle to emulate this self-evolution: fine-tuning is computationally expensive and prone to catastrophic forgetting, while existing memory-based methods rely on passive semantic matching that often retrieves noise. To address these challenges, we propose MemRL, a framework that enables agents to self-evolve via non-parametric reinforcement learning on episodic memory. MemRL explicitly separates the stable reasoning of a frozen LLM from the plastic, evolving memory. Unlike traditional methods, MemRL employs a Two-Phase Retrieval mechanism that first filters candidates by semantic relevance and then selects among them based on learned Q-values (utility). These utilities are continuously refined via environmental feedback in a trial-and-error manner, allowing the agent to distinguish high-value strategies from semantically similar noise. Extensive experiments on HLE, BigCodeBench, ALFWorld, and Lifelong Agent Bench demonstrate that MemRL significantly outperforms state-of-the-art baselines. Our analyses confirm that MemRL effectively reconciles the stability-plasticity dilemma, enabling continuous runtime improvement without weight updates.
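To make the described mechanism concrete, below is a minimal Python sketch of a two-phase retrieval loop over an episodic memory with learned utilities. All names here are hypothetical, and the bandit-style Q-value update is an assumption for illustration; the paper's actual retrieval thresholds and update rule may differ.

```python
import numpy as np

class EpisodicMemory:
    """Minimal sketch of MemRL-style two-phase retrieval (hypothetical API).

    Each entry stores an embedding, the experience text, and a learned
    utility (Q-value). Phase 1 filters by semantic similarity; Phase 2
    ranks the survivors by Q-value. Utilities are refined from environment
    rewards with a simple bandit-style update, which is an assumption here.
    """

    def __init__(self, alpha: float = 0.1):
        self.embeddings: list[np.ndarray] = []
        self.experiences: list[str] = []
        self.q_values: list[float] = []
        self.alpha = alpha  # learning rate for the utility update

    def add(self, embedding: np.ndarray, experience: str, q_init: float = 0.0):
        # Store unit-normalized embeddings so dot products are cosine similarities.
        self.embeddings.append(embedding / np.linalg.norm(embedding))
        self.experiences.append(experience)
        self.q_values.append(q_init)

    def retrieve(self, query_emb: np.ndarray, k_semantic: int = 20, k_final: int = 3):
        """Two-phase retrieval: semantic filter, then Q-value selection."""
        q = query_emb / np.linalg.norm(query_emb)
        sims = np.array([e @ q for e in self.embeddings])
        # Phase 1: keep the k_semantic most semantically relevant candidates.
        candidates = np.argsort(sims)[::-1][:k_semantic]
        # Phase 2: among those, pick the k_final with the highest learned utility.
        ranked = sorted(candidates, key=lambda i: self.q_values[i], reverse=True)
        return ranked[:k_final]

    def update(self, indices, reward: float):
        """Refine utilities from environment feedback (trial and error)."""
        for i in indices:
            self.q_values[i] += self.alpha * (reward - self.q_values[i])

# Usage: the agent retrieves strategies, acts, then reinforces what helped.
mem = EpisodicMemory()
mem.add(np.random.rand(8), "Strategy: decompose the task into subgoals")
hits = mem.retrieve(np.random.rand(8), k_semantic=5, k_final=1)
mem.update(hits, reward=1.0)  # success raises the Q-values of the memories used
```

Because only the stored Q-values change while the LLM's weights stay frozen, a loop of this shape separates stable reasoning from plastic memory, which is how the abstract frames the resolution of the stability-plasticity dilemma.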

Country of Origin
🇨🇳 China

Page Count
23 pages

Category
Computer Science:
Computation and Language