Near Optimal Hardness of Approximating $k$-CSP
By: Dor Minzer, Kai Zhe Zheng
Potential Business Impact:
Proves that computers cannot efficiently approximate a broad class of constraint problems much better than the trivial baseline, pinning down the limits of optimization algorithms.
We show that for every $k\in\mathbb{N}$ and $\varepsilon>0$, for a large enough alphabet size $R$, given a $k$-CSP with alphabet size $R$, it is NP-hard to distinguish between the case that some assignment satisfies at least a $1-\varepsilon$ fraction of the constraints, and the case that no assignment satisfies more than a $1/R^{k-1-\varepsilon}$ fraction of the constraints. This result improves upon prior work of [Chan, Journal of the ACM 2016], who showed the same result with weaker soundness of $O(k/R^{k-2})$, and nearly matches the trivial approximation algorithm, which finds an assignment satisfying at least a $1/R^{k-1}$ fraction of the constraints. Our proof follows the approach of a recent work by the authors, wherein the above result is proved for $k=2$. Our main new ingredient is a counting lemma for hyperedges between pseudo-random sets in the Grassmann graphs, which may be of independent interest.
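To make the $1/R^{k-1}$ baseline concrete, here is a minimal sketch, not taken from the paper, of one standard "trivial" algorithm achieving it. It assumes the instance is $k$-partite (each constraint uses one variable from each of $k$ parts, as is typical in hardness reductions): assign the first $k-1$ parts uniformly at random, then give every variable in the last part a best-response value. On a $(1-\varepsilon)$-satisfiable instance this satisfies at least a $(1-\varepsilon)/R^{k-1}$ fraction of the constraints in expectation. The function name `trivial_kcsp`, the instance encoding, and the demo are illustrative choices, not the paper's construction.

```python
# Hypothetical sketch of the trivial 1/R^(k-1) baseline for k-partite k-CSPs.
import random
from collections import defaultdict

def trivial_kcsp(n_per_part, k, R, constraints):
    """constraints: list of (vars, relation), where vars = (v_1, ..., v_k) gives
    one variable index per part and relation is a set of satisfying tuples in [R]^k."""
    # Step 1: uniformly random values for parts 1..k-1.
    assignment = [[random.randrange(R) for _ in range(n_per_part)]
                  for _ in range(k - 1)]

    # Step 2: best response for each variable in the last part.
    # votes[v][a] = number of constraints on v that value a would satisfy,
    # given the already-fixed random values on the first k-1 parts.
    votes = defaultdict(lambda: defaultdict(int))
    for (vars_, relation) in constraints:
        prefix = tuple(assignment[i][vars_[i]] for i in range(k - 1))
        for a in range(R):
            if prefix + (a,) in relation:
                votes[vars_[-1]][a] += 1
    last_part = [max(range(R), key=lambda a: votes[v][a]) if v in votes else 0
                 for v in range(n_per_part)]
    assignment.append(last_part)

    # Fraction of constraints satisfied by the produced assignment.
    sat = sum(tuple(assignment[i][vars_[i]] for i in range(k)) in relation
              for (vars_, relation) in constraints)
    return assignment, sat / len(constraints)

if __name__ == "__main__":
    # Tiny demo: a satisfiable 3-partite 3-CSP over alphabet size R = 4,
    # with a planted satisfying assignment plus a few extra accepted tuples.
    random.seed(0)
    n, k, R = 20, 3, 4
    hidden = [[random.randrange(R) for _ in range(n)] for _ in range(k)]
    constraints = []
    for _ in range(200):
        vars_ = tuple(random.randrange(n) for _ in range(k))
        good = tuple(hidden[i][vars_[i]] for i in range(k))
        rel = {good} | {tuple(random.randrange(R) for _ in range(k)) for _ in range(3)}
        constraints.append((vars_, rel))
    _, frac = trivial_kcsp(n, k, R, constraints)
    print(f"satisfied fraction: {frac:.3f}  (trivial bound ~ {1 / R ** (k - 1):.3f})")
```

The hardness result above says that, up to the $\varepsilon$ losses in the exponent, no polynomial-time algorithm can do substantially better than this baseline unless P = NP.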
Similar Papers
Tight Bounds for Sparsifying Random CSPs
Data Structures and Algorithms
Shows how far random constraint-satisfaction instances can be shrunk (sparsified) while approximately preserving how many constraints each assignment satisfies.
A Simpler Exponential-Time Approximation Algorithm for MAX-k-SAT
Data Structures and Algorithms
Gives a simpler exponential-time algorithm for approximately maximizing the number of satisfied clauses in MAX-k-SAT.
Min-CSPs on Complete Instances II: Polylogarithmic Approximation for Min-NAE-3-SAT
Data Structures and Algorithms
Gives a polylogarithmic-factor approximation for minimizing violated constraints in Min-NAE-3-SAT on complete instances.