Learning Binary Autoencoder-Based Codes with Progressive Training
By: Vukan Ninkovic, Dejan Vukobratovic
Potential Business Impact:
AI learns to send messages reliably, even with noise.
Error-correcting codes play a central role in digital communication, ensuring that transmitted information can be accurately reconstructed despite channel impairments. Recently, autoencoder (AE) based approaches have gained attention for the end-to-end design of communication systems, offering a data-driven alternative to conventional coding schemes. However, enforcing binary codewords within differentiable AE architectures remains difficult, as discretization breaks gradient flow and often leads to unstable convergence. To overcome this limitation, a simplified two-stage training procedure is proposed, consisting of a continuous pretraining phase followed by direct binarization and fine-tuning, without gradient approximation techniques. For the (7,4) block configuration over a binary symmetric channel (BSC), the learned encoder-decoder pair converges to a rotated version (a coset code) of the optimal Hamming code, naturally recovering its linear and distance properties and thereby achieving the same block error rate (BLER) under maximum likelihood (ML) decoding. These results indicate that compact AE architectures can effectively learn structured, algebraically optimal binary codes through stable and straightforward training.
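To make the two-stage recipe concrete, here is a minimal PyTorch sketch under stated assumptions: the layer widths, the crossover probability P_FLIP, the bipolar (+/-1) signaling, the sign-flip surrogate for the BSC during pretraining, and the choice to update only the decoder after hard binarization (since sign() blocks encoder gradients) are all illustrative guesses, not details taken from the paper.

```python
import torch
import torch.nn as nn

K, N = 4, 7          # (7,4) block code: 4 message bits -> 7 coded bits
P_FLIP = 0.05        # assumed BSC crossover probability (not from the paper)

enc = nn.Sequential(nn.Linear(K, 32), nn.ReLU(), nn.Linear(32, N), nn.Tanh())
dec = nn.Sequential(nn.Linear(N, 32), nn.ReLU(), nn.Linear(32, K))
loss_fn = nn.BCEWithLogitsLoss()

def bsc(x_pm1, p):
    """Flip each +/-1 (or soft) symbol's sign independently with probability p."""
    flips = (torch.rand_like(x_pm1) < p).float()
    return x_pm1 * (1.0 - 2.0 * flips)

# Stage 1: continuous pretraining -- tanh outputs, no hard decisions,
# so gradients flow end to end through encoder, channel, and decoder.
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
for step in range(5000):
    m = torch.randint(0, 2, (256, K)).float()   # random message bits
    x = enc(2 * m - 1)                          # soft codeword in (-1, 1)
    y = bsc(x, P_FLIP)                          # noisy channel (surrogate BSC)
    loss = loss_fn(dec(y), m)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: direct binarization, then decoder-only fine-tuning.
# sign() has zero gradient almost everywhere, so the encoder is frozen
# and no gradient approximation (e.g., straight-through) is needed.
opt = torch.optim.Adam(dec.parameters(), lr=1e-3)
for step in range(5000):
    m = torch.randint(0, 2, (256, K)).float()
    with torch.no_grad():
        x = torch.sign(enc(2 * m - 1))          # hard binary +/-1 codeword
    y = bsc(x, P_FLIP)                          # true BSC on binary symbols
    loss = loss_fn(dec(y), m)
    opt.zero_grad(); loss.backward(); opt.step()
```

After stage 2, enumerating all 16 messages yields the binary codebook; XOR-translating it by any one of its codewords and checking closure under XOR, together with a minimum distance of 3, would confirm the coset-of-Hamming structure reported above.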
Similar Papers
Structured Superposition of Autoencoders for UEP Codes at Intermediate Blocklengths
Information Theory
Makes messages arrive reliably, even with errors.
Machine learning discovers new champion codes
Information Theory
Finds better ways to fix digital mistakes.
TransCoder: A Neural-Enhancement Framework for Channel Codes
Information Theory
Makes wireless messages clearer, even with bad signals.