Learning to Write on Dirty Paper
By: Ezgi Ozyilkan, Oğuzhan Kubilay Ülger, Elza Erkip
Potential Business Impact:
Learns to cancel out known interference in wireless messages.
Dirty paper coding (DPC) is a classical problem in information theory that considers communication in the presence of channel state known only at the transmitter. While the theoretical impact of DPC has been substantial, practical realizations of DPC, such as Tomlinson-Harashima precoding (THP) or lattice-based schemes, often rely on specific modeling assumptions about the input, state and channel. In this work, we explore whether modern learning-based approaches can offer a complementary path forward by revisiting the DPC problem. We propose a data-driven solution in which both the encoder and decoder are parameterized by neural networks. Our proposed model operates without prior knowledge of the state (also referred to as "interference"), channel or input statistics, and recovers nonlinear mappings that yield effective interference pre-cancellation. To the best of our knowledge, this is the first interpretable proof-of-concept demonstrating that learning-based DPC schemes can recover characteristic features of well-established solutions, such as THP and lattice-based precoding, and outperform them in several regimes.
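To make the classical baseline concrete: Tomlinson-Harashima precoding pre-subtracts the known interference at the transmitter and applies a modulo operation to keep the transmit power bounded; the receiver applies the same modulo to strip the wrap-around. Below is a minimal, hedged sketch of scalar THP (not the paper's learned scheme), assuming a one-dimensional modulo lattice of width DELTA and a noiseless channel for clarity:

```python
# Scalar Tomlinson-Harashima precoding sketch (illustrative, not the
# authors' neural DPC model). DELTA is the modulo-lattice width.
DELTA = 4.0

def mod_delta(v, delta=DELTA):
    """Fold v into the interval [-delta/2, delta/2)."""
    return ((v + delta / 2.0) % delta) - delta / 2.0

def thp_encode(symbol, interference):
    """Pre-subtract the known interference, then fold via modulo.
    The result always satisfies |x| <= DELTA/2 (bounded transmit power)."""
    return mod_delta(symbol - interference)

def thp_decode(received):
    """Receiver-side modulo removes the wrap-around introduced at the encoder."""
    return mod_delta(received)

# Example: symbol in [-DELTA/2, DELTA/2), arbitrary known interference.
s = 1.0           # message symbol
i = 7.3           # channel state ("dirt"), known only to the transmitter
x = thp_encode(s, i)      # transmitted signal, |x| <= 2.0
y = x + i                 # channel adds the interference (noise omitted here)
s_hat = thp_decode(y)     # recovers s exactly in the noiseless case
```

With additive noise, recovery holds as long as the noise does not push y across a modulo boundary; the paper's point is that a learned encoder/decoder can discover such modulo-like (and other nonlinear) mappings directly from data, without assuming this structure.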
Similar Papers
Non-Linear Precoding via Dirty Paper Coding for Near-Field Downlink MISO Communications
Information Theory
Makes wireless internet faster for many people nearby.
Computing on Dirty Paper: Interference-Free Integrated Communication and Computing
Signal Processing
Lets devices send messages and compute at once.
Towards Scaling Deep Neural Networks with Predictive Coding: Theory and Practice
Machine Learning (CS)
Makes AI learn faster and use less power.