Reservoir Computing inspired Matrix Multiplication-free Language Model

Published: December 29, 2025 | arXiv ID: 2512.23145v1

By: Takumi Shiratsuchi, Yuichiro Tanaka, Hakaru Tamukoh

Potential Business Impact:

Makes AI models faster and reduces their power consumption.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have achieved state-of-the-art performance in natural language processing; however, their high computational cost remains a major bottleneck. In this study, we target computational efficiency by focusing on a matrix-multiplication-free language model (MatMul-free LM) and further reducing the training cost through an architecture inspired by reservoir computing. Specifically, we partially fix and share the weights of selected layers in the MatMul-free LM and insert reservoir layers to obtain rich dynamic representations without additional training overhead. Additionally, several operations are fused to reduce memory accesses. Experimental results show that the proposed architecture reduces the number of parameters by up to 19%, training time by 9.9%, and inference time by 8.0%, while maintaining performance comparable to the baseline model.
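The core reservoir-computing idea referenced in the abstract — layers whose random weights are fixed at initialization and never trained, yet still provide rich dynamic representations — can be illustrated with a minimal sketch. This is an assumption-laden toy (the class name, dimensions, and echo-state-style update below are illustrative, not the authors' implementation):

```python
import numpy as np

class ReservoirLayer:
    """Toy reservoir layer: fixed random weights, zero trainable parameters.

    Illustrative only; the paper's MatMul-free LM architecture differs.
    """

    def __init__(self, input_dim, reservoir_dim, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        # Input and recurrent weights are drawn once and frozen.
        self.W_in = rng.uniform(-0.5, 0.5, size=(reservoir_dim, input_dim))
        W = rng.uniform(-0.5, 0.5, size=(reservoir_dim, reservoir_dim))
        # Rescale so the recurrent matrix's spectral radius is < 1,
        # a standard echo-state-property heuristic.
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.state = np.zeros(reservoir_dim)

    def step(self, x):
        # Nonlinear state update: rich temporal dynamics without training.
        self.state = np.tanh(self.W_in @ x + self.W @ self.state)
        return self.state

# Feed a toy sequence through the frozen reservoir.
layer = ReservoirLayer(input_dim=8, reservoir_dim=64)
states = [layer.step(np.full(8, float(t))) for t in range(5)]
print(states[-1].shape)  # (64,)
```

Because the reservoir weights are never updated, such layers add representational capacity at no training cost, which is the trade the paper exploits to cut parameters and training time.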

Page Count
9 pages

Category
Computer Science:
Computation and Language