Improved Sample Complexity for Full Coverage in Compact and Continuous Spaces
By: Lyu Yuhuan
Potential Business Impact:
Covers every case with fewer random tries.
Verifying uniform conditions over continuous spaces through random sampling is fundamental in machine learning and control theory, yet classical coverage analyses often yield conservative bounds, particularly at small failure probabilities. We study uniform random sampling on the $d$-dimensional unit hypercube and analyze the number of uncovered subcubes after discretization. By applying a concentration inequality to the uncovered-count statistic, we derive a sample complexity bound with a logarithmic dependence on the failure probability ($\delta$), i.e., $M = O(\tilde{C}\ln(\frac{2\tilde{C}}{\delta}))$, which contrasts sharply with the classical linear $1/\delta$ dependence. Under standard Lipschitz and uniformity assumptions, we present a self-contained derivation and compare our result with classical coupon-collector rates. Numerical studies across dimensions, precision levels, and confidence targets indicate that our bound tracks practical coverage requirements more tightly and scales favorably as $\delta \to 0$. Our findings offer a sharper theoretical tool for algorithms that rely on grid-based coverage guarantees, enabling more efficient sampling, especially in high-confidence regimes.
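As a rough illustration of the claimed rate, the Python sketch below instantiates $M = \tilde{C}\ln(2\tilde{C}/\delta)$ with constant 1 (an assumption; the abstract only states the order and the paper's actual constant may differ) and empirically checks that $M$ uniform samples cover every subcube of a $k^d$ grid with failure frequency at most $\delta$. The function names `paper_bound` and `covers_grid` and the parameter choices $d=2$, $k=8$ are hypothetical, chosen only for the demonstration; this is not the paper's code.

```python
import numpy as np

def paper_bound(C, delta):
    # M = C * ln(2C / delta): the abstract's rate with constant 1
    # (an assumption; the paper's hidden constant may differ).
    return int(np.ceil(C * np.log(2 * C / delta)))

def covers_grid(M, d, k, rng):
    # Draw M uniform points in [0,1]^d, discretize into a k^d grid,
    # and check whether every subcube receives at least one sample.
    pts = rng.random((M, d))
    cells = np.minimum((pts * k).astype(int), k - 1)  # cell index per axis
    flat = np.ravel_multi_index(cells.T, (k,) * d)    # flatten to 1-D cell ids
    return np.unique(flat).size == k ** d

d, k, delta = 2, 8, 0.01   # dimension, cells per axis, failure probability
C = k ** d                 # total number of subcubes (C-tilde)
M = paper_bound(C, delta)

rng = np.random.default_rng(0)
trials = 1000
fails = sum(not covers_grid(M, d, k, rng) for _ in range(trials))
print(f"C={C}, M={M}, empirical failure rate {fails/trials:.4f} (target <= {delta})")
```

If the rate holds with constant 1, the printed failure rate should stay below $\delta$; swapping in a $1/\delta$-type baseline for `paper_bound` shows how quickly the classical dependence grows as the confidence target tightens.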
Similar Papers
Computing High-dimensional Confidence Sets for Arbitrary Distributions
Data Structures and Algorithms
Finds best shapes to cover data points.
Dimension-Free Correlated Sampling for the Hypersimplex
Data Structures and Algorithms
Makes computer programs share information better.
General Coverage Models: Structure, Monotonicity, and Shotgun Sequencing
Information Theory
Finds how many tries to see all DNA pieces.