Learning from Supervision with Semantic and Episodic Memory: A Reflective Approach to Agent Adaptation

Published: October 22, 2025 | arXiv ID: 2510.19897v1

By: Jackson Hassell, Dan Zhang, Hannah Kim, and more

Potential Business Impact:

Helps AI learn from mistakes without retraining.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We investigate how agents built on pretrained large language models can learn target classification functions from labeled examples without parameter updates. While conventional approaches such as fine-tuning are often costly, inflexible, and opaque, we propose a memory-augmented framework that leverages both labeled data and LLM-generated critiques. Our framework uses episodic memory to store instance-level critiques, capturing specific past experiences, and semantic memory to distill these into reusable, task-level guidance. Across a diverse set of tasks, incorporating critiques yields up to a 24.8 percent accuracy improvement over retrieval-based (RAG-style) baselines that rely only on labels. Through extensive empirical evaluation, we uncover distinct behavioral differences between OpenAI and open-source models, particularly in how they handle fact-oriented versus preference-based data. To interpret how models respond to different representations of supervision encoded in memory, we introduce a novel metric, suggestibility, which helps explain observed behaviors and illuminates how model characteristics and memory strategies jointly shape learning dynamics. Our findings highlight the promise of memory-driven, reflective learning for building more adaptive and interpretable LLM agents.
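To make the architecture concrete, here is a minimal Python sketch of the loop the abstract describes: episodic memory accumulates per-example critiques, and semantic memory periodically distills them into task-level guidance used at prediction time. This is a reading of the abstract alone, not the authors' code; `MemoryAgent`, `EpisodicEntry`, the distill-every-10-examples schedule, and the `llm` callable (a prompt-in, text-out function) are all hypothetical.

```python
# Hypothetical sketch of critique-based learning with episodic + semantic
# memory, as described in the abstract. No model parameters are updated.
from dataclasses import dataclass, field


@dataclass
class EpisodicEntry:
    """One labeled example plus an LLM-generated critique of a past prediction."""
    text: str
    label: str
    critique: str


@dataclass
class MemoryAgent:
    episodic: list[EpisodicEntry] = field(default_factory=list)
    semantic: str = ""  # distilled, task-level guidance

    def learn(self, text: str, label: str, llm) -> None:
        # Predict with current memory, then ask the LLM to critique the
        # prediction against the gold label; store the critique episodically.
        prediction = self.predict(text, llm)
        critique = llm(
            f"Input: {text}\nPrediction: {prediction}\nGold label: {label}\n"
            "Explain what guidance would have produced the correct label."
        )
        self.episodic.append(EpisodicEntry(text, label, critique))
        # Periodically distill instance-level critiques into reusable,
        # task-level guidance held in semantic memory (schedule is assumed).
        if len(self.episodic) % 10 == 0:
            recent = "\n".join(e.critique for e in self.episodic[-10:])
            self.semantic = llm(
                f"Current guidance:\n{self.semantic}\n\n"
                f"Recent critiques:\n{recent}\n\n"
                "Rewrite the guidance as concise task-level rules."
            )

    def predict(self, text: str, llm) -> str:
        # Prepend semantic guidance and a few episodic entries to the prompt.
        examples = "\n".join(
            f"Input: {e.text} -> Label: {e.label} (critique: {e.critique})"
            for e in self.episodic[-3:]
        )
        return llm(
            f"Guidance: {self.semantic}\nExamples:\n{examples}\n"
            f"Classify: {text}\nLabel:"
        ).strip()
```

For simplicity the sketch retrieves the three most recent episodic entries; a RAG-style system of the kind the paper benchmarks against would retrieve by embedding similarity instead.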

Page Count
11 pages

Category
Computer Science: Computation and Language