Sherry: Hardware-Efficient 1.25-Bit Ternary Quantization via Fine-grained Sparsification
By: Hong Huang, Decheng Wu, Qiangqiang Hu, and more
The deployment of Large Language Models (LLMs) on resource-constrained edge devices is increasingly hindered by prohibitive memory and computational requirements. While ternary quantization offers a compelling solution by reducing weights to {-1, 0, +1}, current implementations suffer from a fundamental misalignment with commodity hardware. Most existing methods must choose between 2-bit aligned packing, which incurs significant bit wastage, and 1.67-bit irregular packing, which degrades inference speed. To resolve this tension, we propose Sherry, a hardware-efficient ternary quantization framework. Sherry introduces a 3:4 fine-grained sparsity pattern that achieves a regular 1.25-bit width by packing each block of four weights into five bits, restoring power-of-two alignment. Furthermore, we identify a weight-trapping issue in sparse ternary training that leads to representational collapse. To address this, Sherry introduces Arenas, an annealing residual synapse mechanism that maintains representational diversity during training. Empirical evaluations on LLaMA-3.2 across five benchmarks demonstrate that Sherry matches state-of-the-art ternary performance while significantly reducing model size. Notably, on an Intel i7-14700HX CPU, our 1B model incurs no accuracy loss relative to SOTA baselines while providing 25% bit savings and a 10% speedup. The code is available at https://github.com/Tencent/AngelSlim.
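To make the 1.25-bit arithmetic concrete, the sketch below assumes one possible reading of the 3:4 pattern: each block of four ternary weights contains exactly one zero, so 2 bits can index the zero position and 3 bits can carry the signs of the three remaining weights, giving the stated 5 bits per block. This is an illustrative assumption, not the Sherry implementation; the pack_block/unpack_block helpers are hypothetical names.

```python
# Illustrative sketch only (not the authors' kernel): pack four ternary weights
# into 5 bits, assuming a 3:4 block holds exactly one zero and three +/-1 values.
# 2 bits select the zero position, 3 bits store the signs: 4 weights -> 5 bits,
# i.e. 1.25 bits per weight.

def pack_block(block):
    """Pack a list of four ternary weights (one 0, three +/-1) into a 5-bit code."""
    assert len(block) == 4 and block.count(0) == 1
    zero_pos = block.index(0)                      # 2 bits: which slot is zero
    signs = [w for w in block if w != 0]           # the three nonzero weights
    sign_bits = 0
    for i, s in enumerate(signs):                  # 3 bits: 1 => +1, 0 => -1
        sign_bits |= (1 if s > 0 else 0) << i
    return (zero_pos << 3) | sign_bits             # 5-bit code in [0, 31]

def unpack_block(code):
    """Recover the four ternary weights from a 5-bit code."""
    zero_pos = (code >> 3) & 0b11
    sign_bits = code & 0b111
    block, k = [], 0
    for pos in range(4):
        if pos == zero_pos:
            block.append(0)
        else:
            block.append(1 if (sign_bits >> k) & 1 else -1)
            k += 1
    return block

if __name__ == "__main__":
    w = [1, 0, -1, 1]
    code = pack_block(w)
    assert unpack_block(code) == w
    print(f"{w} -> {code:05b}")      # four weights stored in a single 5-bit code
```

Under this assumed encoding, the 32 possible codes exactly cover the 4 x 2^3 valid block configurations, which is one way the 5-bit budget could be met without wasted states.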