Quantized Large Language Models in Biomedical Natural Language Processing: Evaluation and Recommendation
By: Zaifu Zhan, Shuang Zhou, Min Zeng, and more
Potential Business Impact:
Makes big AI models work on smaller computers.
Large language models have demonstrated remarkable capabilities in biomedical natural language processing, yet their rapid growth in size and computational requirements presents a major barrier to adoption in healthcare settings, where data privacy precludes cloud deployment and resources are limited. In this study, we systematically evaluated the impact of quantization on 12 state-of-the-art large language models, including both general-purpose and biomedical-specific models, across eight benchmark datasets covering four key tasks: named entity recognition, relation extraction, multi-label classification, and question answering. We show that quantization substantially reduces GPU memory requirements (by up to 75%) while preserving model performance across diverse tasks, enabling the deployment of 70B-parameter models on 40GB consumer-grade GPUs. In addition, domain-specific knowledge and responsiveness to advanced prompting methods are largely maintained. These findings offer practical guidance, highlighting quantization as an effective strategy for enabling the secure, local deployment of large yet high-capacity language models in biomedical contexts, bridging the gap between technical advances in AI and real-world clinical translation.
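The 75% reduction and the 40GB figure are consistent with quantizing 16-bit weights down to 4 bits. A back-of-the-envelope sketch (the function and the per-parameter accounting are illustrative, not from the paper; it counts weights only and ignores activation and KV-cache overhead):

```python
def weight_memory_gb(n_params: float, bits: int) -> float:
    """Estimate GPU memory in GB needed to hold model weights alone.

    Ignores activations, KV cache, and framework overhead.
    """
    bytes_per_param = bits / 8
    return n_params * bytes_per_param / 1e9

# A 70B-parameter model at different precisions:
fp16 = weight_memory_gb(70e9, 16)  # 140.0 GB -- exceeds any single GPU
int4 = weight_memory_gb(70e9, 4)   # 35.0 GB -- fits a single 40GB GPU
print(f"fp16: {fp16:.0f} GB, 4-bit: {int4:.0f} GB "
      f"({100 * (1 - int4 / fp16):.0f}% reduction)")
```

This matches the abstract's claim: moving from 16-bit to 4-bit weights cuts weight memory by 75%, bringing a 70B model under the 40GB threshold.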
Similar Papers
Resource-Efficient Language Models: Quantization for Fast and Accessible Inference
Artificial Intelligence
Makes big computer brains use less power.
LLM Compression: How Far Can We Go in Balancing Size and Performance?
Computation and Language
Makes smart computer programs run faster and smaller.
How Quantization Shapes Bias in Large Language Models
Computation and Language
Makes AI fairer by checking how it learns.