Reuse, Don't Recompute: Efficient Large Reasoning Model Inference via Memory Orchestration

Published: November 17, 2025 | arXiv ID: 2511.12987v1

By: Daivik Patel, Shrenik Patel

Potential Business Impact:

Lets AI models reuse previously derived answers from memory instead of re-reasoning from scratch, cutting token costs and response latency.

Business Areas:
Semantic Search, Internet Services

Large reasoning models (LRMs) achieve strong accuracy through test-time scaling, generating longer chains of thought or sampling multiple solutions, but at steep costs in tokens and latency. We argue that memory is a core ingredient for efficient reasoning: when evidence already exists, models should think less by reusing structured memory instead of recomputing derivations. We present ENGRAM-R, an inference-time memory layer that integrates typed retrieval with compact fact card representations and explicit citation control. On the LoCoMo benchmark, ENGRAM-R reduces input tokens by 85% and reasoning tokens by 75% compared to full context while maintaining high accuracy. On a multi-hop slice of the LongMemEval benchmark, it achieves similar efficiency with substantial accuracy gains. These results show that memory is not only critical for long-horizon correctness but also a practical lever for efficient reasoning under tight compute, memory, and latency budgets.
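The abstract's core idea, reusing structured memory with typed retrieval, fact cards, and citations instead of regenerating derivations, can be illustrated with a minimal sketch. The names below (FactCard, EngramMemory, answer) are hypothetical for illustration and are not the paper's implementation or API:

```python
from dataclasses import dataclass, field

@dataclass
class FactCard:
    """A compact, citable unit of previously derived evidence (assumed structure)."""
    fact_type: str   # typed retrieval key, e.g. "date", "entity", "relation"
    key: str         # normalized lookup key
    value: str       # the stored answer or derivation result
    source_id: str   # citation handle for explicit attribution

@dataclass
class EngramMemory:
    """Typed retrieval over fact cards (illustrative, not the paper's design)."""
    cards: dict = field(default_factory=dict)

    def store(self, card: FactCard) -> None:
        self.cards[(card.fact_type, card.key)] = card

    def retrieve(self, fact_type: str, key: str) -> FactCard | None:
        return self.cards.get((fact_type, key))

def answer(memory: EngramMemory, fact_type: str, key: str, reason_fn) -> str:
    """Reuse a stored fact card when one exists; otherwise reason, then cache."""
    card = memory.retrieve(fact_type, key)
    if card is not None:
        # Reuse path: no chain-of-thought tokens spent, answer carries a citation.
        return f"{card.value} [cite:{card.source_id}]"
    # Recompute path: run the (expensive) reasoning model once, then memoize.
    value, source_id = reason_fn(fact_type, key)
    memory.store(FactCard(fact_type, key, value, source_id))
    return f"{value} [cite:{source_id}]"

# Example: the first call pays the reasoning cost; a repeat call reuses memory.
mem = EngramMemory()
print(answer(mem, "date", "moon_landing", lambda t, k: ("1969-07-20", "s1")))
print(answer(mem, "date", "moon_landing", lambda t, k: ("1969-07-20", "s1")))
```

A second call with the same typed key takes the reuse path and spends no reasoning tokens, which is the efficiency lever the abstract quantifies on LoCoMo and LongMemEval.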

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Multiagent Systems