FlashMem: Distilling Intrinsic Latent Memory via Computation Reuse

Published: January 9, 2026 | arXiv ID: 2601.05505v1

By: Yubo Hou, Zhisheng Chen, Tao Wan, and more

Potential Business Impact:

Lets AI remember longer conversations without slowing down.

Business Areas:
Artificial Intelligence, Large Language Models

The stateless architecture of Large Language Models inherently lacks a mechanism to preserve dynamic context, compelling agents to redundantly reprocess history to maintain long-horizon autonomy. While latent memory offers a solution, current approaches are hindered by architectural segregation, relying on auxiliary encoders that decouple memory from the reasoning backbone. We propose FlashMem, a framework that distills intrinsic memory directly from transient reasoning states via computation reuse. Leveraging the property that internal representations uniquely encode input trajectories, FlashMem identifies the last hidden state as a sufficient statistic for the interaction history. This enables a Shared-KV Consolidator to synthesize memory by attending directly to the backbone's frozen cache, eliminating redundant re-parameterization. Furthermore, a parameter-free Cognitive Monitor uses attention entropy to adaptively trigger consolidation only when high epistemic uncertainty is detected. Experiments demonstrate that FlashMem matches the performance of heavy baselines while reducing inference latency by a factor of five, effectively bridging the gap between efficiency and persistent cognition.
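
The abstract names two mechanisms: a parameter-free Cognitive Monitor that triggers consolidation when attention entropy signals high epistemic uncertainty, and a Shared-KV Consolidator that forms memory by letting the last hidden state attend to the backbone's frozen KV cache. The PyTorch sketch below is a minimal illustration of what such an entropy-gated, shared-KV consolidation step could look like; all function names, tensor shapes, and the entropy threshold are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def attention_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the last query position's attention distribution,
    averaged over heads. attn_weights: (heads, q_len, k_len)."""
    last = attn_weights[:, -1, :]                        # (heads, k_len)
    ent = -(last * (last + 1e-12).log()).sum(dim=-1)     # (heads,)
    return ent.mean()

def should_consolidate(attn_weights: torch.Tensor, threshold: float) -> bool:
    """Parameter-free trigger (assumed form): consolidate only when attention
    entropy, used here as a proxy for epistemic uncertainty, exceeds a threshold."""
    return attention_entropy(attn_weights).item() > threshold

def consolidate_memory(last_hidden: torch.Tensor,
                       frozen_k: torch.Tensor,
                       frozen_v: torch.Tensor) -> torch.Tensor:
    """Single cross-attention step: the last hidden state, treated as a summary
    of the interaction history, queries the backbone's frozen KV cache, so no
    separate memory encoder re-encodes the history.

    last_hidden: (d,)   frozen_k, frozen_v: (seq_len, d)   returns: (d,)"""
    d = last_hidden.shape[-1]
    scores = frozen_k @ last_hidden / d ** 0.5            # (seq_len,)
    weights = F.softmax(scores, dim=-1)
    return weights @ frozen_v

# Toy usage with random tensors (shapes and threshold are placeholders).
heads, q_len, k_len, d = 8, 16, 128, 64
attn = F.softmax(torch.randn(heads, q_len, k_len), dim=-1)
if should_consolidate(attn, threshold=3.5):
    h = torch.randn(d)
    K, V = torch.randn(k_len, d), torch.randn(k_len, d)
    memory = consolidate_memory(h, K, V)   # latent memory vector reused later
```

The design point the sketch tries to convey is reuse: the keys and values already computed by the reasoning backbone are read as-is, so consolidation adds one lightweight attention pass rather than a second encoder over the full history.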

Country of Origin
🇨🇳 China

Page Count
18 pages

Category
Computer Science:
Computation and Language