Advancing Model Refinement: Muon-Optimized Distillation and Quantization for LLM Deployment
By: Jacob Sander, Brian Jalaian, Venkat R. Dasari
Large Language Models (LLMs) enable advanced natural language processing but are difficult to deploy on resource-constrained edge devices because of their high computational, memory, and energy demands. Optimizing these models requires addressing three key challenges: acquiring task-specific data, fine-tuning for task performance, and compressing models to accelerate inference while reducing resource demands. We propose an integrated framework combining GPTQ-based quantization, low-rank adaptation (LoRA), and a specialized data distillation process to significantly reduce model size and complexity while preserving or enhancing task-specific performance. By leveraging data distillation, knowledge distillation via Kullback-Leibler divergence, Bayesian hyperparameter optimization, and the Muon optimizer, our pipeline achieves up to 2x memory compression (e.g., reducing a 6 GB model to 3 GB) and enables efficient inference for specialized tasks. Empirical results demonstrate superior performance on standard LLM benchmarks compared to GPTQ quantization alone, with the Muon optimizer notably improving fine-tuned models' resistance to accuracy degradation during quantization.
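The abstract names knowledge distillation via Kullback-Leibler divergence as one component of the pipeline. Below is a minimal PyTorch sketch of that loss term; the function name, temperature default, and the alpha-weighted blend with cross-entropy are illustrative assumptions, since the abstract does not specify these details.

```python
import torch
import torch.nn.functional as F

def kd_kl_loss(student_logits: torch.Tensor,
               teacher_logits: torch.Tensor,
               temperature: float = 2.0) -> torch.Tensor:
    """Knowledge-distillation loss: KL divergence between the teacher's
    and the student's softened next-token distributions."""
    # F.kl_div expects log-probabilities for the input (student)
    # and probabilities for the target (teacher).
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Hypothetical usage inside a fine-tuning step: blend the distillation
# term with the ordinary cross-entropy loss on the task labels.
# loss = alpha * kd_kl_loss(s_logits, t_logits) + (1 - alpha) * ce_loss
```

In a typical setup the student here would be the LoRA-adapted, GPTQ-quantized model and the teacher the full-precision original, though the paper's exact pairing is not stated in the abstract.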