Optimizing LLMs Using Quantization for Mobile Execution
By: Agatsya Yadav, Renta Chintala Bhargavi
Potential Business Impact:
Makes big AI models fit on your phone.
Large Language Models (LLMs) offer powerful capabilities, but their significant size and computational requirements hinder deployment on resource-constrained mobile devices. This paper investigates Post-Training Quantization (PTQ) for compressing LLMs for mobile execution. We apply 4-bit PTQ to Meta's Llama 3.2 3B model using the BitsAndBytes library within the Hugging Face Transformers framework, and convert the quantized model to GGUF format with llama.cpp tools for optimized mobile inference. The PTQ workflow achieves a 68.66% reduction in model size, enabling the Llama 3.2 3B model to run efficiently on an Android device, and qualitative validation shows that the 4-bit quantized model performs inference tasks successfully. We demonstrate the feasibility of running the quantized GGUF model on an Android device using the Termux environment and the Ollama framework. PTQ, especially at 4-bit precision combined with mobile-optimized formats like GGUF, provides a practical pathway for deploying capable LLMs on mobile devices, balancing model size and performance.
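To illustrate the quantization step described above, the following is a minimal sketch, not the authors' exact code, of loading Llama 3.2 3B with 4-bit BitsAndBytes quantization through Hugging Face Transformers. The model ID, the NF4 quantization type, and the bfloat16 compute dtype are assumptions, since the abstract only specifies 4-bit PTQ with BitsAndBytes, and the Llama 3.2 checkpoint is gated behind a Hugging Face access request.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # Assumed model ID; access to the Llama 3.2 checkpoints must be requested on Hugging Face.
    model_id = "meta-llama/Llama-3.2-3B"

    # 4-bit post-training quantization settings; NF4 and bfloat16 compute are assumptions.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,  # weights are quantized to 4 bits at load time
        device_map="auto",
    )

    # Quick qualitative check that the quantized model still generates sensible text.
    prompt = "Explain post-training quantization in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

For the on-device step, the standard llama.cpp flow would convert the Hugging Face checkpoint to GGUF (convert_hf_to_gguf.py) and quantize it with llama-quantize (for example to Q4_K_M), after which the GGUF file can be served through Ollama inside Termux; the exact conversion settings used in the paper are not given in the abstract.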
Similar Papers
LLMPi: Optimizing LLMs for High-Throughput on Raspberry Pi
Machine Learning (CS)
Makes smart computer talk work on small devices.
LLM Compression: How Far Can We Go in Balancing Size and Performance?
Computation and Language
Makes smart computer programs run faster and smaller.
Performance Trade-offs of Optimizing Small Language Models for E-Commerce
Artificial Intelligence
Makes small computers understand online shoppers better.