Computational Complexity of Statistics: New Insights from Low-Degree Polynomials
By: Alexander S. Wein
Potential Business Impact:
Explains why certain statistical problems are inherently hard for computers to solve efficiently.
This is a survey on the use of low-degree polynomials to predict and explain the apparent statistical-computational tradeoffs in a variety of average-case computational problems. In a nutshell, this framework measures the complexity of a statistical task by the minimum degree that a polynomial function must have in order to solve it. The main goals of this survey are to (1) describe the types of problems where the low-degree framework can be applied, encompassing questions of detection (hypothesis testing), recovery (estimation), and more; (2) discuss some philosophical questions surrounding the interpretation of low-degree lower bounds, and notably the extent to which they should be treated as evidence for inherent computational hardness; (3) explore the known connections between low-degree polynomials and other related approaches such as the sum-of-squares hierarchy and statistical query model; and (4) give an overview of the mathematical tools used to prove low-degree lower bounds. A list of open problems is also included.
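The framework's central idea, measuring a task's complexity by the minimum degree a polynomial test statistic needs, can be illustrated with a toy example. The sketch below (not taken from the survey; the parameters n, mu, and the number of trials are illustrative assumptions) shows a degree-1 polynomial, the normalized sum of coordinates, distinguishing pure-noise samples from samples with a small planted mean shift. In the low-degree framework, a polynomial "succeeds" at detection when its mean separates the two distributions by much more than its fluctuations under the null.

```python
import math
import random
import statistics

# Hedged illustration of the low-degree testing idea: can a low-degree
# polynomial of the data distinguish a null distribution N(0,1)^n from a
# planted distribution N(mu,1)^n?  All parameters here are assumptions
# chosen for illustration, not values from the survey.
random.seed(0)
n, trials, mu = 400, 2000, 0.2

def degree_one_stat(xs):
    """Degree-1 polynomial test statistic: the normalized coordinate sum."""
    return sum(xs) / math.sqrt(len(xs))

# Sample the statistic under the null (no signal) and planted (mean mu) models.
null_vals = [degree_one_stat([random.gauss(0, 1) for _ in range(n)])
             for _ in range(trials)]
planted_vals = [degree_one_stat([random.gauss(mu, 1) for _ in range(n)])
                for _ in range(trials)]

# Separation in the low-degree sense: shift in the statistic's mean, measured
# in units of its null standard deviation.  For this statistic it scales like
# mu * sqrt(n), so the two hypotheses become easy to tell apart as n grows.
advantage = ((statistics.mean(planted_vals) - statistics.mean(null_vals))
             / statistics.stdev(null_vals))
print(f"degree-1 separation: {advantage:.2f}")
```

For genuinely hard problems the survey studies, the point is the converse: one proves that *every* polynomial of degree up to some bound has small separation of this kind, which is then interpreted as evidence of computational hardness.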
Similar Papers
Computational lower bounds in latent models: clustering, sparse-clustering, biclustering
Statistics Theory
Proves limits on what efficient algorithms can achieve for clustering problems.
Efficient Algorithms for Cardinality Estimation and Conjunctive Query Evaluation With Simple Degree Constraints
Databases
Speeds up database queries by estimating how many results they will return.
A computational transition for detecting multivariate shuffled linear regression by low-degree polynomials
Machine Learning (Stat)
Detects hidden linear relationships in data whose rows have been shuffled.