Score: 2

MemBuilder: Reinforcing LLMs for Long-Term Memory Construction via Attributed Dense Rewards

Published: January 9, 2026 | arXiv ID: 2601.05488v1

By: Zhiyu Shen, Ziming Wu, Fuming Lai, and more

BigTech Affiliations: Tencent

Potential Business Impact:

Trains language models to build and maintain long-term memory of conversations, so assistants stay consistent across many sessions.

Business Areas:
Multi-level Marketing; Sales and Marketing

Maintaining consistency in long-term dialogues remains a fundamental challenge for LLMs, as standard retrieval mechanisms often fail to capture the temporal evolution of historical states. While memory-augmented frameworks offer a structured alternative, current systems rely on static prompting of closed-source models or suffer from ineffective training paradigms with sparse rewards. We introduce MemBuilder, a reinforcement learning framework that trains models to orchestrate multi-dimensional memory construction with attributed dense rewards. MemBuilder addresses two key challenges: (1) Sparse Trajectory-Level Rewards: we employ synthetic session-level question generation to provide dense intermediate rewards across extended trajectories; and (2) Multi-Dimensional Memory Attribution: we introduce contribution-aware gradient weighting that scales policy updates based on each component's downstream impact. Experimental results show that MemBuilder enables a 4B-parameter model to outperform state-of-the-art closed-source baselines, exhibiting strong generalization across long-term dialogue benchmarks.
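The abstract names two mechanisms without giving details: dense intermediate rewards from synthetic session-level questions, and contribution-aware gradient weighting across memory dimensions. The sketch below is a minimal, hypothetical illustration of how those two ideas could fit together in a REINFORCE-style update; function and variable names (`session_level_reward`, `contribution_weights`, `weighted_policy_loss`, `answer_fn`) are assumptions for illustration, not the paper's released code or exact formulation.

```python
import torch

def session_level_reward(memory, qa_pairs, answer_fn):
    """Dense intermediate reward: fraction of synthetic session-level
    questions answered correctly from the memory built so far.
    (Assumed scoring rule; the paper's reward may differ.)"""
    correct = sum(answer_fn(memory, q) == a for q, a in qa_pairs)
    return correct / max(len(qa_pairs), 1)

def contribution_weights(per_dim_rewards, eps=1e-8):
    """Turn each memory dimension's estimated downstream impact into a
    normalized weight used to scale its share of the policy gradient."""
    total = sum(per_dim_rewards.values()) + eps
    return {dim: r / total for dim, r in per_dim_rewards.items()}

def weighted_policy_loss(log_probs_by_dim, advantage, weights):
    """REINFORCE-style loss where each dimension's log-prob term is
    scaled by its contribution weight before summing."""
    loss = torch.zeros(())
    for dim, log_probs in log_probs_by_dim.items():
        loss = loss - weights[dim] * advantage * log_probs.sum()
    return loss

# Illustrative usage with dummy tensors standing in for the policy's
# token log-probabilities on each memory dimension's update actions.
log_probs_by_dim = {
    "facts": torch.tensor([-0.2, -0.5], requires_grad=True),
    "events": torch.tensor([-0.1, -0.3], requires_grad=True),
}
per_dim_rewards = {"facts": 0.8, "events": 0.2}  # e.g., per-dimension QA accuracy
weights = contribution_weights(per_dim_rewards)
loss = weighted_policy_loss(log_probs_by_dim, advantage=0.6, weights=weights)
loss.backward()
```

Under these assumptions, dimensions whose memory writes contribute more to downstream question answering receive proportionally larger gradient updates, while the session-level reward supplies learning signal at each step of a long trajectory rather than only at its end.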

Country of Origin
πŸ‡¨πŸ‡³ China

Page Count
19 pages

Category
Computer Science:
Computation and Language