Score: 1

Optimizing LLMs Using Quantization for Mobile Execution

Published: December 6, 2025 | arXiv ID: 2512.06490v1

By: Agatsya Yadav, Renta Chintala Bhargavi

Potential Business Impact:

Shrinks large AI models so they can run directly on a smartphone.

Business Areas:
Artificial Intelligence, Mobile Computing

Large Language Models (LLMs) offer powerful capabilities, but their significant size and computational requirements hinder deployment on resource-constrained mobile devices. This paper investigates Post-Training Quantization (PTQ) for compressing LLMs for mobile execution. We apply 4-bit PTQ to Meta's Llama 3.2 3B model using the BitsAndBytes library with the Hugging Face Transformers framework. The quantized model is then converted to GGUF format using llama.cpp tools for optimized mobile inference. The PTQ workflow achieves a 68.66% reduction in model size, enabling the Llama 3.2 3B model to run efficiently on an Android device. Qualitative validation shows that the 4-bit quantized model performs inference tasks successfully. We demonstrate the feasibility of running the quantized GGUF model on an Android device using the Termux environment and the Ollama framework. PTQ, especially at 4-bit precision combined with mobile-optimized formats like GGUF, provides a practical pathway for deploying capable LLMs on mobile devices while balancing model size and performance.
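
As a rough illustration of the quantization step described in the abstract, the following Python sketch loads Llama 3.2 3B in 4-bit precision with BitsAndBytes through Hugging Face Transformers and runs a quick generation check. The model ID, NF4 quantization type, compute dtype, and prompt are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal 4-bit PTQ loading sketch (assumed configuration, not the paper's exact setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.2-3B"  # assumed Hub ID; gated, requires access approval

# 4-bit post-training quantization settings applied by bitsandbytes at load time.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumed NF4; fp4 is the other 4-bit option
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype for matmuls
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Quick qualitative check that the 4-bit model still produces sensible text.
prompt = "Explain post-training quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The GGUF conversion and on-device step could look roughly like the sketch below, which drives llama.cpp's conversion and quantization tools from Python; the script name, binary path, quantization preset, and Ollama commands are assumptions based on current llama.cpp and Ollama conventions rather than the paper's exact invocation.

```python
# Hypothetical GGUF conversion pipeline; tool paths and flags are assumptions.
import subprocess

# 1. Convert the Hugging Face checkpoint to a higher-precision GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", "Llama-3.2-3B",
     "--outfile", "llama-3.2-3b-f16.gguf", "--outtype", "f16"],
    check=True,
)

# 2. Quantize the GGUF file to 4 bits (Q4_K_M shown as one common preset).
subprocess.run(
    ["llama.cpp/llama-quantize",
     "llama-3.2-3b-f16.gguf", "llama-3.2-3b-q4_k_m.gguf", "Q4_K_M"],
    check=True,
)

# 3. On the Android device under Termux, the GGUF file can be registered and
#    run with Ollama via a Modelfile, e.g.:
#      ollama create llama32-3b-q4 -f Modelfile   # Modelfile contains: FROM ./llama-3.2-3b-q4_k_m.gguf
#      ollama run llama32-3b-q4
```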

Country of Origin
🇮🇳 India

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)