Product distribution learning with imperfect advice

Published: November 13, 2025 | arXiv ID: 2511.10366v1

By: Arnab Bhattacharyya, Davin Choo, Philips George John, and more

Potential Business Impact:

Lets algorithms learn a data distribution from fewer samples when given an approximate guess as a hint.

Business Areas:
Personalization, Commerce and Shopping

Given i.i.d. samples from an unknown distribution $P$, the goal of distribution learning is to recover the parameters of a distribution that is close to $P$. When $P$ belongs to the class of product distributions on the Boolean hypercube $\{0,1\}^d$, it is known that $\Omega(d/\varepsilon^2)$ samples are necessary to learn $P$ within total variation (TV) distance $\varepsilon$. We revisit this problem when the learner is also given as advice the parameters of a product distribution $Q$. We show that there is an efficient algorithm to learn $P$ within TV distance $\varepsilon$ that has sample complexity $\tilde{O}(d^{1-\eta}/\varepsilon^2)$, if $\|\mathbf{p} - \mathbf{q}\|_1 < \varepsilon d^{0.5 - \Omega(\eta)}$. Here, $\mathbf{p}$ and $\mathbf{q}$ are the mean vectors of $P$ and $Q$ respectively, and no bound on $\|\mathbf{p} - \mathbf{q}\|_1$ is known to the algorithm a priori.
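To make the setting concrete, here is a toy sketch in Python of learning a product distribution with advice. This is *not* the paper's algorithm; it is a hypothetical illustration in which each coordinate keeps the advised mean unless the empirical mean disagrees by more than a threshold `tau` (a made-up parameter), showing how good advice can reduce the estimation error achievable from a small sample.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 50                                            # dimension of the hypercube {0,1}^d
p = rng.uniform(0.2, 0.8, size=d)                 # true (hidden) mean vector of P
q = np.clip(p + rng.normal(0, 0.01, size=d), 0, 1)  # advice Q: close to p in L1

def learn_with_advice(samples, q, tau):
    """Toy estimator: per coordinate, trust the advice q unless the
    empirical mean contradicts it by more than tau."""
    emp = samples.mean(axis=0)
    return np.where(np.abs(emp - q) > tau, emp, q)

n = 200                                           # deliberately small sample budget
samples = (rng.random((n, d)) < p).astype(float)  # n i.i.d. draws from P
p_hat = learn_with_advice(samples, q, tau=0.05)
```

With this seed, the advice-aware estimate `p_hat` has smaller L1 error against `p` than the raw empirical mean, mirroring the abstract's theme that accurate (but unverified) advice can substitute for samples.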

Country of Origin
🇸🇬 Singapore, 🇺🇸 United States, 🇬🇧 United Kingdom

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)