Approximate Bayesian Inference via Bitstring Representations
By: Aleksanteri Sladek, Martin Trapp, Arno Solin
Potential Business Impact:
Helps AI know how sure it is while using less computer power.
The machine learning community has recently put effort into quantized or low-precision arithmetic to scale large models. This paper proposes performing probabilistic inference in the quantized, discrete parameter space created by these representations, effectively enabling us to learn a continuous distribution using discrete parameters. We consider both 2D densities and quantized neural networks, and introduce a tractable learning approach using probabilistic circuits. This method offers a scalable solution for handling complex distributions and provides clear insights into model behavior. We validate our approach on various models, demonstrating inference efficiency without sacrificing accuracy. This work advances scalable, interpretable machine learning by utilizing discrete approximations for probabilistic computations.
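To make the core idea concrete, here is a minimal sketch (not the authors' code) of inference over a quantized parameter space: a toy 2D target density is evaluated only at the values representable with a few bits per parameter, and a discrete approximation is fitted over that grid. Everything specific below is an assumption for illustration: the toy mixture target, the uniform 4-bit grid on [-4, 4], and a mean-field categorical approximation fitted with coordinate ascent. The paper instead uses probabilistic circuits, which can capture richer structure (e.g., multimodality and dependencies) over the bitstring space.

```python
import numpy as np

def target_log_density(t1, t2):
    """Unnormalized log density of a toy, slightly asymmetric 2D Gaussian mixture."""
    def log_gauss(x, y, mx, my, s):
        return -((x - mx) ** 2 + (y - my) ** 2) / (2.0 * s ** 2)
    return np.logaddexp(np.log(0.7) + log_gauss(t1, t2, -1.5, -1.5, 0.6),
                        np.log(0.3) + log_gauss(t1, t2, 1.5, 1.5, 0.6))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

bits = 4                                    # each parameter stored as a 4-bit code
levels = np.linspace(-4.0, 4.0, 2 ** bits)  # the 2^bits representable parameter values
G1, G2 = np.meshgrid(levels, levels, indexing="ij")
log_p = target_log_density(G1, G2)          # target evaluated only on the discrete grid

# Mean-field categorical approximation q(theta1, theta2) = q1(theta1) * q2(theta2),
# fitted with coordinate-ascent variational inference (CAVI) on the grid:
#   q1(i) proportional to exp( sum_j q2(j) * log p(i, j) ), and symmetrically for q2.
q1 = np.full(2 ** bits, 1.0 / 2 ** bits)
q2 = np.full(2 ** bits, 1.0 / 2 ** bits)
for _ in range(100):
    q1 = softmax(log_p @ q2)   # expectation of log p over q2, renormalized
    q2 = softmax(q1 @ log_p)   # expectation of log p over q1, renormalized

# q assigns a probability to every 4-bit code, so posterior summaries
# (marginals, means, modes) are exact finite sums over the 2^bits grid points.
# Note: this fully factorized q tends to lock onto a single mode; the paper's
# probabilistic circuits can represent richer (e.g., multimodal) distributions.
print("E_q[theta1] =", float(levels @ q1))
print("E_q[theta2] =", float(levels @ q2))
```

For quantized neural networks the same picture applies in higher dimensions: each weight lives on a small set of bitstring-coded values, and the approximate posterior is a distribution over those codes rather than over continuous weights.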
Similar Papers
SQS: Bayesian DNN Compression through Sparse Quantized Sub-distributions
Machine Learning (CS)
Makes AI smaller and faster for phones.
One-Bit Quantization for Random Features Models
Machine Learning (CS)
Makes AI smarter and faster using less computer power.
Convergence for Discrete Parameter Updates
Machine Learning (CS)
Makes computer learning faster and use less power.