SLM-Bench: A Comprehensive Benchmark of Small Language Models on Environmental Impacts -- Extended Version
By: Nghiem Thanh Pham, Tung Kieu, Duc-Manh Nguyen, and more
Potential Business Impact:
Tests small AI models to find the best, fastest, and greenest.
Small Language Models (SLMs) offer computational efficiency and accessibility, yet a systematic evaluation of their performance and environmental impact has been lacking. We introduce SLM-Bench, the first benchmark specifically designed to assess SLMs across multiple dimensions, including accuracy, computational efficiency, and sustainability. SLM-Bench evaluates 15 SLMs on 9 NLP tasks using 23 datasets spanning 14 domains. The evaluation is conducted on 4 hardware configurations, providing a rigorous comparison of effectiveness. Unlike prior benchmarks, SLM-Bench quantifies 11 metrics across correctness, computation, and consumption, enabling a holistic assessment of efficiency trade-offs. The evaluation is performed under controlled hardware conditions, ensuring fair comparisons across models. We develop an open-source benchmarking pipeline with standardized evaluation protocols to facilitate reproducibility and further research. Our findings highlight the diverse trade-offs among SLMs: some models excel in accuracy while others achieve superior energy efficiency. SLM-Bench sets a new standard for SLM evaluation, bridging the gap between resource efficiency and real-world applicability.
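To make the correctness/computation/consumption triad concrete, here is a minimal, hypothetical sketch of what one benchmark run could look like. It is not the actual SLM-Bench pipeline; it assumes the Hugging Face transformers text-generation pipeline for inference and the codecarbon library for energy/CO2 estimation, with toy exact-match accuracy standing in for the benchmark's real correctness metrics.

import time

from codecarbon import EmissionsTracker  # estimates energy use and CO2 emissions
from transformers import pipeline

def benchmark_slm(model_name, prompts, references):
    # Load a small causal LM behind a text-generation pipeline.
    generator = pipeline("text-generation", model=model_name)

    tracker = EmissionsTracker(log_level="error")  # suppress per-step logging
    tracker.start()
    start = time.perf_counter()

    # return_full_text=False keeps only the generated continuation,
    # so outputs can be compared directly against references.
    outputs = [
        generator(p, max_new_tokens=64, return_full_text=False)[0]["generated_text"]
        for p in prompts
    ]

    elapsed = time.perf_counter() - start
    emissions_kg = tracker.stop()  # estimated kg CO2-eq for the whole run

    # Toy correctness metric: exact-match accuracy against references.
    accuracy = sum(
        o.strip() == r.strip() for o, r in zip(outputs, references)
    ) / len(prompts)

    return {
        "accuracy": accuracy,                          # correctness
        "seconds_per_prompt": elapsed / len(prompts),  # computation
        "co2_kg": emissions_kg,                        # consumption
    }

Repeating such a run for each of the 15 SLMs, 23 datasets, and 4 hardware configurations would yield the grid of accuracy/latency/emissions measurements from which the trade-offs described in the abstract can be read off.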
Similar Papers
SLMQuant: Benchmarking Small Language Model Quantization for Practical Deployment
Machine Learning (CS)
Makes small AI models work on phones.
HealthSLM-Bench: Benchmarking Small Language Models for Mobile and Wearable Healthcare Monitoring
Artificial Intelligence
Lets health trackers predict problems privately.