Enhancing Machine Learning Model Efficiency through Quantization and Bit Depth Optimization: A Performance Analysis on Healthcare Data
By: Mitul Goswami, Romit Chatterjee
Potential Business Impact:
Makes smart computer programs run much faster.
This research aims to optimize complex learning models by applying quantization and bit-depth optimization techniques. The objective is to significantly reduce time complexity while preserving model performance, addressing the challenge of long execution times in such models. Two medical datasets were used as case studies, with a Logistic Regression (LR) machine learning model applied to each. Using quantization and bit-depth optimization strategies, the input data are downscaled from float64 to float32 and int32. The results demonstrate a significant reduction in time complexity with only a minimal loss in model accuracy after optimization, illustrating the effectiveness of the optimization approach. The study concludes that the impact of these techniques varies with the parameters of the dataset and model.
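The downscaling described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: it uses a synthetic dataset (standing in for the medical datasets, which are not public here), scikit-learn's `LogisticRegression`, and NumPy casting to compare a float64 baseline against a float32 version of the same inputs. Casting to int32 would additionally require scaling continuous features before truncation, so only the float path is shown.

```python
import time

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the paper's medical datasets (hypothetical data).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)


def fit_and_score(dtype):
    """Cast features to `dtype`, train LR, return (test accuracy, fit seconds)."""
    X_cast = X.astype(dtype)  # the bit-depth downscaling step
    X_tr, X_te, y_tr, y_te = train_test_split(X_cast, y, random_state=0)
    model = LogisticRegression(max_iter=1000)
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    return model.score(X_te, y_te), elapsed


acc64, t64 = fit_and_score(np.float64)
acc32, t32 = fit_and_score(np.float32)
print(f"float64: accuracy={acc64:.4f}, fit time={t64:.4f}s")
print(f"float32: accuracy={acc32:.4f}, fit time={t32:.4f}s")
```

On data of this size the accuracy gap between the two precisions is typically negligible, mirroring the paper's finding that lower bit depth trades little accuracy for reduced compute cost; actual timing gains depend on hardware and BLAS backend.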
Similar Papers
Quantized Large Language Models in Biomedical Natural Language Processing: Evaluation and Recommendation
Computation and Language
Makes big AI models work on smaller computers.
Interpreting the Effects of Quantization on LLMs
Machine Learning (CS)
Makes big computer brains work on small devices.
Bits for Privacy: Evaluating Post-Training Quantization via Membership Inference
Machine Learning (CS)
Makes AI models more private by using less detail.