Reservoir Computing inspired Matrix Multiplication-free Language Model
By: Takumi Shiratsuchi, Yuichiro Tanaka, Hakaru Tamukoh
Potential Business Impact:
Makes AI models run faster and use less power.
Large language models (LLMs) have achieved state-of-the-art performance in natural language processing; however, their high computational cost remains a major bottleneck. In this study, we target computational efficiency by building on the matrix-multiplication-free language model (MatMul-free LM) and further reducing training cost through an architecture inspired by reservoir computing. Specifically, we partially fix and share the weights of selected layers in the MatMul-free LM and insert reservoir layers to obtain rich dynamic representations without additional training overhead. We also fuse several operations to reduce memory accesses. Experimental results show that the proposed architecture reduces the parameter count by up to 19%, training time by 9.9%, and inference time by 8.0%, while maintaining performance comparable to the baseline model.
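To make the core idea concrete, here is a minimal PyTorch sketch of "frozen, shared layers plus untrained reservoir layers". It is a hypothetical illustration, not the authors' implementation: the class names (`ReservoirLayer`, `SharedFrozenBlockStack`), the tanh nonlinearity, and all sizes are assumptions, and it uses ordinary floating-point linear layers for clarity, whereas the actual MatMul-free LM replaces dense matrix multiplications with ternary-weight operations.

```python
import torch
import torch.nn as nn


class ReservoirLayer(nn.Module):
    """Fixed random projection with a nonlinearity, in the spirit of
    classical reservoir computing. Hypothetical sketch; the paper's
    exact reservoir design may differ."""

    def __init__(self, dim: int, scale: float = 0.1):
        super().__init__()
        # Registered as a buffer, not a parameter: the weights are
        # never updated, so the layer adds representational richness
        # at zero training cost.
        self.register_buffer("weight", torch.randn(dim, dim) * scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(x @ self.weight.T)


class SharedFrozenBlockStack(nn.Module):
    """Reuses one frozen linear block across all positions in the
    stack, interleaved with reservoir layers: a toy analogue of
    'partially fix and share the weights of selected layers'."""

    def __init__(self, dim: int, depth: int):
        super().__init__()
        shared = nn.Linear(dim, dim, bias=False)
        shared.weight.requires_grad_(False)  # fixed AND shared weights
        self.shared = shared
        self.reservoirs = nn.ModuleList(
            [ReservoirLayer(dim) for _ in range(depth)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for res in self.reservoirs:
            # Residual update; only modules outside this stack
            # (e.g., a trainable readout) would receive gradients.
            x = x + res(self.shared(x))
        return x


# Usage: no parameters in the stack require gradients.
y = SharedFrozenBlockStack(dim=256, depth=4)(torch.randn(2, 16, 256))
```

Because neither the shared block nor the reservoir weights are trained, the parameter and training-time savings reported in the abstract come directly from shrinking the set of trainable weights while the frozen random projections keep the representations expressive.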
Similar Papers
Reservoir Computing as a Language Model
Computation and Language
Makes AI learn words faster and more cheaply.
Neuromorphic Principles for Efficient Large Language Models on Intel Loihi 2
Neural and Evolutionary Computing
Makes AI models run faster and use less power.
System-performance and cost modeling of Large Language Model training and inference
Hardware Architecture
Makes big AI models cheaper to train and run.