Engineering Fast and Space-Efficient Recompression from SLP-Compressed Text
By: Ankith Reddy Adudodla, Dominik Kempa
Potential Business Impact:
Builds text indexes much faster, using less memory.
Compressed indexing enables powerful queries over massive and repetitive textual datasets using space proportional to the compressed input. While theoretical advances have led to highly efficient index structures, their practical construction remains a bottleneck. This is especially true for complex components such as the recompression RLSLP, a grammar-based representation crucial for building powerful text indexes that support widely used suffix array queries. In this work, we present the first implementation of recompression RLSLP construction that runs in compressed time, operating on an LZ77-like approximation of the input. Compared to state-of-the-art uncompressed-time methods, our approach achieves up to a 46$\times$ speedup and 17$\times$ lower RAM usage on large, repetitive inputs. These gains unlock scalability to larger datasets and affirm compressed computation as a practical path forward for fast index construction.
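To make the central object concrete: an RLSLP (run-length straight-line program) is a context-free grammar that derives exactly one string, where every rule is either a pair rule A → BC or a run rule A → B^k. The sketch below is illustrative only (the symbol names and dictionary encoding are made up, not taken from the paper's implementation); it shows how such a grammar can represent a repetitive string in space far smaller than the string itself.

```python
# Toy RLSLP: each nonterminal maps either to a pair rule ("pair", B, C),
# meaning A -> BC, or a run rule ("run", B, k), meaning A -> B repeated k times.
# Symbols absent from the rule set are terminal characters.

def expand(symbol, rules):
    """Recursively expand a grammar symbol into the string it derives."""
    if symbol not in rules:          # terminal character
        return symbol
    rhs = rules[symbol]
    if rhs[0] == "pair":             # A -> B C
        _, b, c = rhs
        return expand(b, rules) + expand(c, rules)
    else:                            # A -> B^k  (run-length rule)
        _, b, k = rhs
        return expand(b, rules) * k

# A 2-rule grammar deriving the 12-character repetitive string "abababababab":
rules = {
    "X": ("pair", "a", "b"),   # X -> ab
    "S": ("run", "X", 6),      # S -> X^6
}

print(expand("S", rules))  # -> abababababab
```

The run rule is what distinguishes an RLSLP from a plain SLP: a repetition of any length costs a single rule, which is why such grammars stay small on highly repetitive inputs.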
Similar Papers
1+1>2: A Synergistic Sparse and Low-Rank Compression Method for Large Language Models
Computation and Language
Makes big AI models smaller and faster.
Lossless Compression of Large Language Model-Generated Text via Next-Token Prediction
Machine Learning (CS)
Makes computer text smaller without losing information.
LZD-style Compression Scheme with Truncation and Repetitions
Data Structures and Algorithms
Makes files smaller, faster, and better.