A Neural Network Algorithm for KL Divergence Estimation with Quantitative Error Bounds
By: Mikil Foss, Andrew Lamperski
Potential Business Impact:
Gives computers a practical way to measure how different two data distributions are, with provable accuracy guarantees.
Estimating the Kullback-Leibler (KL) divergence between random variables is a fundamental problem in statistical analysis. For continuous random variables, traditional information-theoretic estimators scale poorly with dimension and/or sample size. To mitigate this challenge, a variety of methods have been proposed to estimate KL divergences and related quantities, such as mutual information, using neural networks. Existing theoretical analyses show that neural network parameters achieving low error exist; however, because these analyses rely on non-constructive neural network approximation theorems, they do not guarantee that the algorithms used in practice actually achieve low error. In this paper, we propose a KL divergence estimation algorithm using a shallow neural network with randomized hidden weights and biases (i.e., a random feature method). We show that with high probability, the algorithm achieves a KL divergence estimation error of $O(m^{-1/2}+T^{-1/3})$, where $m$ is the number of neurons and $T$ is both the number of steps of the algorithm and the number of samples.
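The abstract does not spell out the algorithm's details, so the following is only a minimal sketch of the kind of estimator it describes: a shallow network whose hidden weights and biases are drawn at random and frozen (a random feature method), with only the output weights trained by stochastic gradient steps. The Donsker-Varadhan variational objective, the tanh activation, the step-size schedule, and the function name `kl_estimate_random_features` are all assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def kl_estimate_random_features(x_p, x_q, m=512, steps=2000, lr=0.05, batch=128, seed=0):
    """Sketch of a KL(P || Q) estimator from samples of P and Q.

    Uses the Donsker-Varadhan lower bound E_P[f] - log E_Q[exp(f)] with
    f(x) = w . phi(x), where phi is a fixed random-feature layer and only
    the output weights w are trained.  Illustrative only; details such as
    the objective and step sizes are assumptions, not the paper's algorithm.
    """
    rng = np.random.default_rng(seed)
    d = x_p.shape[1]

    # Randomized hidden layer: weights A and biases b are sampled once and frozen.
    A = rng.normal(size=(d, m)) / np.sqrt(d)
    b = rng.uniform(-np.pi, np.pi, size=m)
    phi = lambda x: np.tanh(x @ A + b)            # features, shape (n, m)

    w = np.zeros(m)                               # trainable output weights

    for t in range(steps):
        xp = x_p[rng.integers(len(x_p), size=batch)]   # minibatch from P
        xq = x_q[rng.integers(len(x_q), size=batch)]   # minibatch from Q
        phi_p, phi_q = phi(xp), phi(xq)
        fq = phi_q @ w

        # Gradient of E_P[f] - log E_Q[exp(f)] with respect to w,
        # using a softmax over the Q minibatch for numerical stability.
        soft = np.exp(fq - fq.max())
        soft /= soft.sum()
        grad = phi_p.mean(axis=0) - soft @ phi_q
        w += lr / np.sqrt(t + 1) * grad           # stochastic gradient ascent

    # Evaluate the Donsker-Varadhan bound with the learned f on all samples.
    fp, fq = phi(x_p) @ w, phi(x_q) @ w
    return fp.mean() - (np.log(np.mean(np.exp(fq - fq.max()))) + fq.max())

# Example: two 1-D Gaussians N(0,1) and N(1,1); the true KL divergence is 0.5.
rng = np.random.default_rng(1)
x_p = rng.normal(0.0, 1.0, size=(5000, 1))
x_q = rng.normal(1.0, 1.0, size=(5000, 1))
print(kl_estimate_random_features(x_p, x_q))
```

Because the hidden layer is fixed, the objective is concave in the output weights $w$, which is what makes a quantitative error bound of the form $O(m^{-1/2}+T^{-1/3})$ plausible for a simple stochastic gradient scheme like the one sketched above.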
Similar Papers
Better Estimation of the KL Divergence Between Language Models
Computation and Language
Measures the difference between language models more accurately.
Uncertainty Quantification for Incomplete Multi-View Data Using Divergence Measures
CV and Pattern Recognition
Tells computers how confident to be when parts of the data are missing.
Head-Tail-Aware KL Divergence in Knowledge Distillation for Spiking Neural Networks
Artificial Intelligence
Helps small spiking neural networks learn from bigger ones more effectively.