FlashMem: Distilling Intrinsic Latent Memory via Computation Reuse
By: Yubo Hou, Zhisheng Chen, Tao Wan, and more
Potential Business Impact:
Lets AI remember longer conversations without slowing down.
The stateless architecture of Large Language Models inherently lacks a mechanism to preserve dynamic context, compelling agents to redundantly reprocess history to maintain long-horizon autonomy. While latent memory offers a solution, current approaches are hindered by architectural segregation, relying on auxiliary encoders that decouple memory from the reasoning backbone. We propose FlashMem, a framework that distills intrinsic memory directly from transient reasoning states via computation reuse. Leveraging the property that internal representations uniquely encode input trajectories, FlashMem identifies the last hidden state as a sufficient statistic for the interaction history. This enables a Shared-KV Consolidator to synthesize memory by attending directly to the backbone's frozen cache, eliminating redundant re-parameterization. Furthermore, a parameter-free Cognitive Monitor leverages attention entropy to adaptively trigger consolidation only when high epistemic uncertainty is detected. Experiments demonstrate that FlashMem matches the performance of heavy baselines while reducing inference latency by a factor of five, effectively bridging the gap between efficiency and persistent cognition.
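The abstract describes two mechanisms: a Shared-KV Consolidator that reads the backbone's frozen KV cache using the last hidden state as its query, and a parameter-free Cognitive Monitor that triggers consolidation when attention entropy is high. The sketch below is not the authors' code; it is a minimal PyTorch illustration under the assumption of a decoder that exposes its last hidden state, cached key/value states (flattened here into a single tensor for simplicity), and last-layer attention weights. All names (`attention_entropy`, `SharedKVConsolidator`, `maybe_consolidate`) and the threshold value are illustrative, not the paper's API.

```python
# Illustrative sketch only -- not the FlashMem reference implementation.
import torch
import torch.nn.functional as F


def attention_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the attention distribution.

    attn_weights: (batch, heads, query_len, key_len), rows sum to 1.
    Diffuse (high-entropy) attention is used as a proxy for high
    epistemic uncertainty, as the abstract describes.
    """
    ent = -(attn_weights * (attn_weights + 1e-9).log()).sum(dim=-1)
    return ent.mean(dim=(1, 2))  # one scalar per batch element


class SharedKVConsolidator(torch.nn.Module):
    """Cross-attends from the last hidden state into the backbone's frozen
    cached states, producing a compact memory vector without re-encoding
    the interaction history."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, last_hidden: torch.Tensor, cached_states: torch.Tensor) -> torch.Tensor:
        # last_hidden:   (batch, 1, d_model)       -- query (the "sufficient statistic")
        # cached_states: (batch, seq_len, d_model) -- frozen backbone states (assumed layout)
        memory, _ = self.attn(last_hidden, cached_states, cached_states)
        return memory  # (batch, 1, d_model) consolidated latent memory


def maybe_consolidate(last_hidden, cached_states, attn_weights,
                      consolidator, threshold=2.5):
    """Parameter-free trigger: consolidate only when entropy exceeds a
    threshold (value here is arbitrary, for illustration)."""
    if attention_entropy(attn_weights).mean() > threshold:
        return consolidator(last_hidden, cached_states)
    return None  # low uncertainty: skip consolidation, reuse existing memory
```

In this reading, the efficiency gain comes from the query attending over states the backbone has already cached, rather than re-running an auxiliary encoder over the full history; in practice the KV cache is stored per layer and per head, which this flattened sketch glosses over.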
Similar Papers
Reuse, Don't Recompute: Efficient Large Reasoning Model Inference via Memory Orchestration
Multiagent Systems
Lets computers remember answers to save time.
SimpleMem: Efficient Lifelong Memory for LLM Agents
Artificial Intelligence
Makes AI remember more with less effort.
Hindsight is 20/20: Building Agent Memory that Retains, Recalls, and Reflects
Computation and Language
Helps AI remember and explain its thoughts better.