TensorSLM: Energy-efficient Embedding Compression of Sub-billion Parameter Language Models on Low-end Devices
By: Mingxue Xu, Yao Lei Xu, Danilo P. Mandic
Potential Business Impact:
Makes small AI models run faster and use less power.
Small Language Models (SLMs, or on-device LMs) have significantly fewer parameters than Large Language Models (LLMs). They are typically deployed on low-end devices, such as mobile phones and single-board computers. Unlike LLMs, which rely on increasing model size for better generalisation, SLMs designed for edge applications are expected to adapt to their deployment environments and to be energy-efficient under device battery-life constraints, requirements that are not addressed for datacentre-deployed LLMs. This paper addresses these two requirements by proposing a training-free token embedding compression approach using Tensor-Train Decomposition (TTD). Each pre-trained token embedding vector is converted into a lower-dimensional Matrix Product State (MPS). We comprehensively evaluate the extracted low-rank structures in terms of compression ratio, language task performance, latency, and energy consumption on a typical low-end device, i.e., a Raspberry Pi. Taking the sub-billion-parameter versions of GPT-2/Cerebras-GPT and OPT models as examples, our approach achieves language task performance comparable to the original model with around $2.0\times$ embedding layer compression, while the energy consumption of a single query drops by half.
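To make the core idea concrete, the sketch below shows one common way to convert a single pre-trained embedding vector into Tensor-Train (MPS) cores via sequential truncated SVDs, and to contract the cores back for lookup. This is a minimal illustration of TTD applied to a vector, not the authors' implementation: the function names, the factorisation of the 768-dimensional GPT-2 embedding into (8, 8, 12), and the rank cap of 4 are all illustrative assumptions.

```python
import numpy as np

def tt_decompose(vector, dims, max_rank):
    """Decompose a 1-D embedding vector into Tensor-Train (MPS) cores via sequential SVD.

    vector   : 1-D array whose length equals prod(dims)
    dims     : factorisation of the embedding dimension, e.g. (8, 8, 12) for 768
    max_rank : cap on the TT-ranks (controls the compression ratio)

    Illustrative sketch only; dims and max_rank here are assumed, not from the paper.
    """
    cores = []
    rank = 1
    remaining = np.asarray(vector, dtype=float).reshape(rank, -1)
    for d in dims[:-1]:
        # Unfold: rows = (previous rank * current mode size), columns = everything else.
        remaining = remaining.reshape(rank * d, -1)
        u, s, vt = np.linalg.svd(remaining, full_matrices=False)
        new_rank = min(max_rank, len(s))
        cores.append(u[:, :new_rank].reshape(rank, d, new_rank))
        # Carry the truncated singular values forward into the next unfolding.
        remaining = (np.diag(s[:new_rank]) @ vt[:new_rank]).reshape(new_rank, -1)
        rank = new_rank
    cores.append(remaining.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the MPS cores back into the full embedding vector (used at lookup time)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(-1)

# Example: compress one 768-dimensional token embedding (GPT-2 hidden size).
rng = np.random.default_rng(0)
embedding = rng.standard_normal(768)
cores = tt_decompose(embedding, dims=(8, 8, 12), max_rank=4)
approx = tt_reconstruct(cores)
n_params = sum(c.size for c in cores)
print(f"stored parameters: {n_params} vs 768, "
      f"relative error: {np.linalg.norm(embedding - approx) / np.linalg.norm(embedding):.3f}")
```

Under these assumed settings the three cores hold 208 numbers instead of 768; in practice the factorisation of the embedding dimension and the TT-ranks are the knobs that trade compression ratio against reconstruction error and lookup latency.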
Similar Papers
Scaling Up Efficient Small Language Models Serving and Deployment for Semantic Job Search
Information Retrieval
Makes smart search engines faster and cheaper.
SLMQuant: Benchmarking Small Language Model Quantization for Practical Deployment
Machine Learning (CS)
Makes small AI models work on phones.
Regional Tiny Stories: Using Small Models to Compare Language Learning and Tokenizer Performance
Computation and Language
Helps small computers understand Indian languages.