Optimal Proximity Gap for Folded Reed--Solomon Codes via Subspace Designs
By: Fernando Granha Jeronimo, Lenny Liu, Pranav Rajpal
A collection of sets satisfies a $(\delta,\varepsilon)$-proximity gap with respect to some property if, for every set in the collection, either (i) all members of the set are $\delta$-close to the property in (relative) Hamming distance, or (ii) only a small $\varepsilon$-fraction of members are $\delta$-close to the property. In a seminal work, Ben-Sasson \textit{et al.}\ showed that the collection of affine subspaces exhibits a $(\delta,\varepsilon)$-proximity gap with respect to the property of being Reed--Solomon (RS) codewords, with $\delta$ up to the so-called Johnson bound for list decoding. Their technique relies on the Guruswami--Sudan list decoding algorithm for RS codes, which is guaranteed to work in the Johnson bound regime. Folded Reed--Solomon (FRS) codes are known to achieve the optimal list decoding radius, a regime known as (list decoding) capacity, and a rich line of list decoding algorithms has been developed for them. It is therefore natural to ask whether FRS codes exhibit an analogous $(\delta,\varepsilon)$-proximity gap with $\delta$ up to the optimal capacity regime. We answer this question in the affirmative (and the framework naturally applies more generally to suitable subspace-design codes). An additional motivation for understanding proximity gaps for FRS codes is the recent result [BCDZ'25] showing that they exhibit properties similar to random linear codes; related properties were previously established for RS codes with random evaluation points [LMS'25], as well as for codes over constant-size alphabets based on AEL [JS'25].
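For orientation, the proximity-gap condition described above can be stated formally as follows; the notation ($\mathcal{C}$ for the collection, $P$ for the property, $\Delta$ for relative Hamming distance) is ours and this is only a sketch of the standard definition, not a quotation from the paper:
\[
\forall\, S \in \mathcal{C}: \qquad \Pr_{u \in S}\bigl[\Delta(u, P) \le \delta\bigr] \;\in\; [0, \varepsilon] \cup \{1\},
\]
that is, within any set of the collection, the $\delta$-close members are either all of the set or at most an $\varepsilon$-fraction of it. As a further point of reference (standard values, stated here only for context): for a code of rate $\rho$, the Johnson bound permits $\delta$ up to roughly $1 - \sqrt{\rho}$, whereas list decoding capacity permits $\delta$ up to $1 - \rho$.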
Similar Papers
Structure Theorems (and Fast Algorithms) for List Recovery of Subspace-Design Codes
Information Theory
Finds hidden messages even with many mistakes.
Algorithmic Improvements to List Decoding of Folded Reed-Solomon Codes
Information Theory
Makes computers fix broken data faster.