What Can Be Recovered Under Sparse Adversarial Corruption? Assumption-Free Theory for Linear Measurements

Published: October 28, 2025 | arXiv ID: 2510.24215v1

By: Vishal Halder, Alexandre Reiffers-Masson, Abdeldjalil Aïssa-El-Bey, and others

Potential Business Impact:

Determines what can still be reliably recovered from measurement data when some entries are adversarially corrupted.

Business Areas:
A/B Testing Data and Analytics

Let $\mathbf{A} \in \mathbb{R}^{m \times n}$ be an arbitrary, known matrix and $\mathbf{e}$ a $q$-sparse adversarial vector. Given $\mathbf{y} = \mathbf{A} x^* + \mathbf{e}$ and $q$, we seek the smallest set containing $x^*$ (hence the one conveying maximal information about $x^*$) that is uniformly recoverable from $\mathbf{y}$ without knowing $\mathbf{e}$. While exact recovery of $x^*$ under strong (and often impractical) structural assumptions on $\mathbf{A}$ or $x^*$ (for example, restricted isometry or sparsity) is well studied, recoverability for arbitrary $\mathbf{A}$ and $x^*$ has remained open. Our main result shows that the best one can hope to recover is $x^* + \ker(\mathbf{U})$, where $\mathbf{U}$ is the unique projection matrix onto the intersection of the rowspaces of all submatrices of $\mathbf{A}$ obtained by deleting $2q$ rows. Moreover, we prove that every $x$ minimizing the $\ell_0$-norm of $\mathbf{y} - \mathbf{A} x$ lies in $x^* + \ker(\mathbf{U})$, which yields a constructive approach to recovering this set.
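The characterization above can be checked numerically for small instances: the intersection of the rowspaces of all $2q$-row-deleted submatrices equals the orthogonal complement of the span of their null spaces, since the orthogonal complement of each rowspace $\mathrm{row}(\mathbf{A}_S)$ is $\ker(\mathbf{A}_S)$. The brute-force NumPy sketch below computes $\mathbf{U}$ this way; the function names are ours (not from the paper), and the enumeration over all $\binom{m}{2q}$ deletion patterns is feasible only for small $m$.

```python
# Brute-force sketch of the projection U from the paper's characterization.
# Hypothetical helper names; exponential in m, for illustration only.
import itertools
import numpy as np

def nullspace(M, tol=1e-10):
    """Orthonormal basis (as columns) for null(M), computed via SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:].T

def recoverable_projection(A, q, tol=1e-10):
    """Orthogonal projection U onto the intersection of the rowspaces of
    all submatrices of A obtained by deleting 2q rows."""
    m, n = A.shape
    # Complement of rowspace(A_S) is null(A_S); collect one basis per subset S.
    complements = [nullspace(A[list(rows), :], tol)
                   for rows in itertools.combinations(range(m), m - 2 * q)]
    N = np.hstack(complements)
    if N.shape[1] == 0:            # every submatrix has full column rank
        return np.eye(n)
    # Intersection of the rowspaces = orthogonal complement of span(N).
    B = nullspace(N.T, tol)        # columns span the intersection
    return B @ B.T

# Full recovery: every (m - 2q)-row submatrix of this 6x2 matrix has rank 2
# (no two rows are parallel), so U is the identity and ker(U) = {0}.
A = np.array([[1., 0], [0, 1], [1, 1], [1, 2], [2, 1], [1, 3]])
U = recoverable_projection(A, q=1)
```

As a sanity check in the other direction, for $\mathbf{A}$ with rows $(1,0),(1,0),(0,1),(0,1)$ and $q=1$, deleting the two copies of $(1,0)$ (or of $(0,1)$) leaves a rank-one submatrix, the intersection of rowspaces is $\{0\}$, and the sketch returns $\mathbf{U} = 0$: nothing about $x^*$ is uniformly recoverable.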

Page Count
5 pages

Category
Computer Science:
Information Theory