TOGGLE: Temporal Logic-Guided Large Language Model Compression for Edge
By: Khurram Khalil, Khaza Anuarul Hoque
Large Language Models (LLMs) deliver exceptional performance across natural language tasks but demand substantial computational resources, limiting their deployment on resource-constrained edge devices. Existing compression techniques, such as quantization and pruning, often degrade critical linguistic properties and lack formal guarantees that model behavior is preserved. We propose Temporal Logic-Guided Large Language Model Compression (TOGGLE), a novel framework that leverages Signal Temporal Logic (STL) to formally specify and enforce linguistic properties during compression. TOGGLE employs STL-robustness-guided Bayesian optimization to systematically explore layer-wise quantization and pruning configurations, generating compressed models that formally satisfy the specified linguistic constraints without retraining or fine-tuning. Evaluating TOGGLE on four LLM architectures (GPT-2, DeepSeek-V2 7B, LLaMA 3 8B, and Mistral 7B), we achieve up to a 3.3x reduction in computational cost (FLOPs) and up to a 68.8% reduction in model size while satisfying all specified linguistic properties. TOGGLE represents the first integration of formal methods into LLM compression, enabling efficient, verifiable deployment of LLMs on edge hardware.
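The abstract describes the core mechanism: a search over per-layer compression settings, scored by the STL robustness of the compressed model's behavior, with only spec-satisfying configurations accepted. Below is a minimal, illustrative Python sketch of that idea, assuming scikit-optimize for the Bayesian optimization loop; the layer count, the perplexity-based STL property, the threshold TAU, and the helper names (stl_robustness, model_cost) are hypothetical stand-ins for exposition, not the authors' implementation.

```python
# Hypothetical sketch of an STL-robustness-guided compression search:
# Bayesian optimization over per-layer bit-widths and pruning ratios,
# penalized whenever the STL robustness of the compressed model drops
# below zero. Requires: pip install numpy scikit-optimize
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer, Real

N_LAYERS = 12  # illustrative; e.g., GPT-2 small

def stl_robustness(bits, prune):
    """Stand-in for monitoring the compressed model against an STL spec.

    For a property "globally (perplexity_ratio <= TAU)", the robustness
    over a discrete trace s is rho = min_t (TAU - s[t]); rho >= 0 iff the
    property holds. Here the trace is a synthetic proxy that degrades as
    compression gets more aggressive; TOGGLE would instead run the actual
    compressed LLM on probe inputs and monitor its outputs.
    """
    degradation = 0.15 * np.mean(8.0 / np.asarray(bits, float)) + np.mean(prune)
    trace = 1.0 + degradation * np.linspace(0.5, 1.5, 10)  # fake perplexity ratios
    TAU = 1.3  # hypothetical tolerance on perplexity inflation
    return float(np.min(TAU - trace))

def model_cost(bits, prune):
    """Proxy for relative FLOPs/size of the compressed model (lower is better)."""
    return float(np.mean(np.asarray(bits, float) / 16.0) * (1.0 - np.mean(prune)))

def objective(x):
    bits, prune = x[:N_LAYERS], x[N_LAYERS:]
    rho = stl_robustness(bits, prune)
    # Hard-penalize configurations that violate the STL spec (rho < 0), so
    # the optimizer trades off cost only among spec-satisfying models.
    return model_cost(bits, prune) + 1e3 * max(0.0, -rho)

space = ([Integer(2, 16, name=f"bits_{i}") for i in range(N_LAYERS)]
         + [Real(0.0, 0.6, name=f"prune_{i}") for i in range(N_LAYERS)])

res = gp_minimize(objective, space, n_calls=40, random_state=0)
best_bits, best_prune = res.x[:N_LAYERS], res.x[N_LAYERS:]
print("best cost:", res.fun, "rho:", stl_robustness(best_bits, best_prune))
```

In the paper's setting, the monitor would evaluate the full set of STL-encoded linguistic properties on the compressed model, and a configuration counts as valid only if every property's robustness is non-negative, which is what makes the resulting compression formally verifiable rather than merely heuristic.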
Similar Papers
LLM Compression: How Far Can We Go in Balancing Size and Performance?
Computation and Language
Explores how much LLMs can be compressed before performance suffers.
Language-Guided Temporal Token Pruning for Efficient VideoLLM Processing
CV and Pattern Recognition
Speeds up video LLMs by pruning temporal tokens using language guidance.
Spatio-Temporal Pruning for Compressed Spiking Large Language Models
Neural and Evolutionary Computing
Compresses spiking LLMs with spatio-temporal pruning to cut power use.