Score: 1

Small Language Models: Architectures, Techniques, Evaluation, Problems and Future Adaptation

Published: May 26, 2025 | arXiv ID: 2505.19529v2

By: Tanjil Hasan Sakib, Md. Tanzib Hosain, Md. Kishor Morol

Potential Business Impact:

Enables small AI models to understand and perform many language tasks at low cost.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Small Language Models (SLMs) have gained substantial attention due to their ability to execute diverse language tasks successfully while using fewer computational resources. These models are particularly well suited for deployment in constrained environments, such as mobile devices, on-device processing, and edge systems. In this study, we present a comprehensive assessment of SLMs, focusing on their design frameworks, training approaches, and techniques for lowering model size and complexity. We offer a novel classification system to organize the optimization approaches applied to SLMs, encompassing strategies like pruning, quantization, and model compression. Furthermore, we assemble an evaluation suite from existing SLM studies and datasets, establishing a rigorous platform for measuring SLM capabilities. Alongside this, we discuss the important difficulties that remain unresolved in this field, including trade-offs between efficiency and performance, and we suggest directions for future study. We anticipate that this study will serve as a beneficial guide for researchers and practitioners who aim to construct compact, efficient, and high-performing language models.
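To give a concrete sense of one optimization approach the abstract names, below is a minimal sketch of symmetric post-training int8 quantization of a weight tensor. This is an illustrative example, not the paper's method: the function names (`quantize_int8`, `dequantize`) and the simple per-tensor scaling scheme are assumptions chosen for clarity.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization: map floats to int8 with one scale."""
    scale = float(np.abs(weights).max()) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Quantize a small random weight matrix and measure reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = np.abs(w - w_hat).max()  # bounded by half the quantization step
```

Storing `q` (1 byte per weight) plus a single scale instead of 4-byte floats cuts memory roughly 4x, at the cost of a per-weight error of at most half the quantization step; real SLM pipelines typically refine this with per-channel scales or quantization-aware training.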

Country of Origin
🇺🇸 🇧🇩 United States, Bangladesh

Page Count
9 pages

Category
Computer Science:
Computation and Language