What Can Be Recovered Under Sparse Adversarial Corruption? Assumption-Free Theory for Linear Measurements
By: Vishal Halder, Alexandre Reiffers-Masson, Abdeldjalil Aïssa-El-Bey, and more
Potential Business Impact:
Recovers the true signal from measurements corrupted by a few adversarially chosen errors.
Let $\mathbf{A} \in \mathbb{R}^{m \times n}$ be an arbitrary, known matrix and $\mathbf{e}$ a $q$-sparse adversarial vector. Given $\mathbf{y} = \mathbf{A} x^* + \mathbf{e}$ and $q$, we seek the smallest set containing $x^*$ (hence the one conveying maximal information about $x^*$) that is uniformly recoverable from $\mathbf{y}$ without knowledge of $\mathbf{e}$. While exact recovery of $x^*$ under strong (and often impractical) structural assumptions on $\mathbf{A}$ or $x^*$ (for example, restricted isometry or sparsity) is well studied, recoverability for arbitrary $\mathbf{A}$ and $x^*$ has remained open. Our main result shows that the best one can hope to recover is $x^* + \ker(\mathbf{U})$, where $\mathbf{U}$ is the unique orthogonal projection matrix onto the intersection of the rowspaces of all submatrices of $\mathbf{A}$ obtained by deleting $2q$ rows. Moreover, we prove that every $x$ minimizing the $\ell_0$-norm of $\mathbf{y} - \mathbf{A} x$ lies in $x^* + \ker(\mathbf{U})$, which then gives a constructive approach to recovering this set.
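The projection $\mathbf{U}$ is computable directly from this characterization, at combinatorial cost in $m$. Below is a minimal numerical sketch (Python with NumPy; not the authors' code, and the helper names `orthonormal_null` and `identifiable_projection` are hypothetical). It relies on the standard identity $\bigcap_i V_i = \big(\sum_i V_i^{\perp}\big)^{\perp}$: the orthogonal complement of each rowspace $\mathrm{row}(\mathbf{A}_S)$ is the null space of the submatrix $\mathbf{A}_S$, so stacking null-space bases and taking the orthogonal complement of their span yields the intersection.

```python
import itertools
import numpy as np

def orthonormal_null(M: np.ndarray, tol: float = 1e-10) -> np.ndarray:
    """Orthonormal basis (as columns) of the null space of M, via full SVD."""
    _, s, Vt = np.linalg.svd(M, full_matrices=True)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

def identifiable_projection(A: np.ndarray, q: int, tol: float = 1e-10) -> np.ndarray:
    """Orthogonal projection U onto the intersection of the rowspaces of all
    submatrices of A obtained by deleting 2q rows; ker(U) is then the
    unidentifiable component of x* under q-sparse corruption."""
    m, n = A.shape
    keep = m - 2 * q
    assert keep > 0, "need m > 2q for the characterization to be nontrivial"
    # Each rowspace complement is the submatrix's null space; stack their bases.
    blocks = [orthonormal_null(A[list(S), :], tol)
              for S in itertools.combinations(range(m), keep)]
    N = np.hstack(blocks)            # columns span the sum of the complements
    B = orthonormal_null(N.T, tol)   # basis of (col N)^perp = the intersection
    return B @ B.T                   # the unique orthogonal projection U

# Hypothetical sanity check: for a generic Gaussian 6x4 A with q = 1, every
# 4-row submatrix has full column rank, so ker(U) = {0} and U is the identity,
# i.e. x* itself is uniquely recoverable.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
print(np.allclose(identifiable_projection(A, q=1), np.eye(4)))  # True
```

Enumerating all $\binom{m}{2q}$ row deletions makes this sketch feasible only for small $m$; the paper's $\ell_0$-minimization result is the constructive route to recovering the set $x^* + \ker(\mathbf{U})$ itself.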
Similar Papers
On robust recovery of signals from indirect observations
Statistics Theory
Fixes messy data to find hidden information.
Improved Bounds on Access-Redundancy Tradeoffs in Quantized Linear Computations
Information Theory
Makes computers guess answers with fewer questions.