Instance-Dependent Testing of Samplers using Interval Conditioning
By: Rishiraj Bhattacharyya, Sourav Chakraborty, Yash Pote, and more
Sampling algorithms play a pivotal role in probabilistic AI. However, verifying whether a sampler program indeed samples from its claimed distribution is a notoriously hard problem. Provably correct testers for different kinds of samplers, such as Barbarik, Teq, Flash, and CubeProbe, have been proposed only in the last few years. All of these testers focus on worst-case efficiency and do not support verification of samplers over infinite domains, a case that arises frequently in astronomy, finance, network security, and elsewhere. In this work, we design the first tester of samplers with instance-dependent efficiency, which allows us to test samplers over the natural numbers. Our tester is built on a novel algorithm that estimates the distance between an unknown and a known probability distribution within an interval-conditioning framework. The core technical contribution is a new connection to probability mass estimation for continuous distributions. The practical gains are also substantial: our experiments demonstrate up to a 1000x speedup over state-of-the-art testers.
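To make the interval-conditioning idea concrete, below is a minimal Python sketch, an illustration under our own assumptions rather than the paper's algorithm. It emulates interval-conditioned access to an unknown sampler over the naturals by rejection sampling, then compares the empirical frequencies against a known distribution restricted to the same interval, giving a crude plug-in estimate of their conditional total variation distance. The helper names (`conditioned_sample`, `interval_tv_estimate`) and the buggy test sampler are hypothetical.

```python
import random
from collections import Counter

def conditioned_sample(sampler, lo, hi, max_tries=100_000):
    # Hypothetical helper: emulate interval-conditioned access by
    # rejection -- redraw until the sample lands in [lo, hi].
    for _ in range(max_tries):
        x = sampler()
        if lo <= x <= hi:
            return x
    raise RuntimeError("interval carries too little mass under the sampler")

def interval_tv_estimate(sampler, known_pmf, lo, hi, n_samples=20_000):
    # Plug-in total-variation distance between the unknown sampler and
    # the known distribution, both conditioned on the interval [lo, hi].
    counts = Counter(conditioned_sample(sampler, lo, hi) for _ in range(n_samples))
    interval_mass = sum(known_pmf(k) for k in range(lo, hi + 1))
    return 0.5 * sum(
        abs(counts[k] / n_samples - known_pmf(k) / interval_mass)
        for k in range(lo, hi + 1)
    )

# Example: a subtly buggy Geometric(1/2) sampler over the naturals,
# tested against the exact pmf P(X = k) = (1/2)^k for k >= 1.
def buggy_geometric():
    x = 1
    while random.random() < 0.6:  # a correct sampler would use 0.5
        x += 1
    return x

print(interval_tv_estimate(buggy_geometric, lambda k: 0.5 ** k, lo=1, hi=8))
```

A real instance-dependent tester would also choose intervals adaptively and bound the number of samples drawn; the sketch fixes a single interval purely for clarity.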
Similar Papers
Verifying Sampling Algorithms via Distributional Invariants
Logic in Computer Science
Proves that computer programs using randomness work correctly.
Testing Uniform Random Samplers: Methods, Datasets and Protocols
Logic in Computer Science
Tests computer programs to find hidden problems.
Assessing the Quality of Binomial Samplers: A Statistical Distance Framework
Computation
Makes computer math more trustworthy and correct.