Range-Coder with Fast Adaptation and Table-Based Decoding
By: Tilo Strutz, Roman Rischke
Potential Business Impact:
Makes data compression and decompression much faster.
The transmission or storage of signals typically involves data compression. The final processing step in compression systems is generally an entropy coding stage, which converts symbols into a bit stream based on their probability distribution. A distinct class of entropy coding methods operates not by mapping input symbols to discrete codewords but by operating on intervals or ranges. This approach enables a more accurate approximation of the source entropy, particularly for sources with highly skewed or varying symbol distributions. Representative techniques in this category include traditional arithmetic coding, range coding, and methods based on asymmetric numeral systems (ANS). The complexity of these methods depends mainly on three processing steps: the core encoding and decoding routines that perform the calculations, the interval-based determination of the correct symbol at the decoder, and the effort of adapting to the varying symbol distribution. The interval-based symbol determination at the decoder typically requires a search procedure. Previous work has shown that this search can be replaced by a table-based approach with O(1) complexity, but with the side effect that adapting the symbol statistics becomes infeasible because updating the table is too time-consuming. We propose an adaptation process based on a ring-buffer technique that enables adaptive table-based decoding and allows a division in the encoder and decoder core routines to be replaced by a bit-shift operation. This accelerates the coding process significantly. In static (non-adaptive) mode, the coding time can be reduced by about 40 percent. In adaptive mode, the proposed technique is faster than alternative approaches for alphabets of about 12 to 64 different symbols when comparing the combined encoder and decoder time.
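To make the two acceleration ideas named in the abstract concrete, the following C sketch illustrates table-based symbol determination and the replacement of a division by a bit shift. It is a minimal illustration under the assumption of a fixed, power-of-two total frequency count; the alphabet, the counts, and the simplified decoder state are invented for the example and do not reproduce the authors' implementation.

    #include <stdint.h>
    #include <stdio.h>

    #define NSYM        4          /* alphabet size (illustrative)           */
    #define TOTAL_BITS  4          /* total frequency count = 2^TOTAL_BITS   */
    #define TOTAL       (1u << TOTAL_BITS)

    /* cumulative counts: scaled values cum[s] .. cum[s+1]-1 belong to symbol s */
    static const uint32_t cum[NSYM + 1] = { 0, 8, 12, 14, 16 };

    /* decoding table: maps every scaled value 0..TOTAL-1 directly to its
     * symbol, so symbol determination costs O(1) instead of a search      */
    static uint8_t sym_table[TOTAL];

    static void build_table(void)
    {
        for (uint32_t s = 0; s < NSYM; s++)
            for (uint32_t v = cum[s]; v < cum[s + 1]; v++)
                sym_table[v] = (uint8_t)s;
    }

    /* conventional symbol determination: search the cumulative counts */
    static uint32_t find_symbol_search(uint32_t value)
    {
        uint32_t s = 0;
        while (value >= cum[s + 1])    /* linear search; could also be binary */
            s++;
        return s;
    }

    int main(void)
    {
        /* simplified decoder state (illustrative values only) */
        uint32_t low = 0, range = 1u << 24, code = 0x00c00000;

        build_table();

        /* because TOTAL is a power of two, range / TOTAL becomes a shift */
        uint32_t step  = range >> TOTAL_BITS;   /* instead of range / TOTAL   */
        uint32_t value = (code - low) / step;   /* scaled value in 0..TOTAL-1 */

        printf("search: symbol %u, table: symbol %u\n",
               find_symbol_search(value), sym_table[value]);
        return 0;
    }

Rebuilding sym_table takes time proportional to its size, which is why adapting the symbol statistics after every coded symbol is the expensive step the abstract points to; the ring-buffer technique proposed in the paper targets exactly this update cost.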
Similar Papers
Information-Theoretic Equivalences Across Rate-Distortion, Quantization, and Decoding
Information Theory
Makes data compression and error correction work together.
Redefining Information Theory: From Quantization and Rate--Distortion to a Foundational Mathematical Framework
Information Theory
Makes all math a simple code of 0s and 1s.
Polar Coding and Linear Decoding
Information Theory
Makes wireless signals more reliable and secure.