Forward-Forward Autoencoder Architectures for Energy-Efficient Wireless Communications
By: Daniel Seifert, Onur Günlü, Rafael F. Schaefer
Potential Business Impact:
Makes computer learning faster and uses less power.
The application of deep learning to communications systems has been a growing field of interest in recent years. Forward-forward (FF) learning is an efficient alternative to the backpropagation (BP) algorithm, the training procedure typically used for neural networks. Among its several advantages, FF learning does not require the communication channel to be differentiable and does not rely on the global availability of partial derivatives, allowing for an energy-efficient implementation. In this work, we design end-to-end learned autoencoders using the FF algorithm and numerically evaluate their performance for the additive white Gaussian noise and Rayleigh block fading channels. We demonstrate their competitiveness with BP-trained systems in the case of joint coding and modulation, and in a scenario where a fixed, non-differentiable modulation stage is applied. Moreover, we provide further insights into the design principles of the FF network, its training convergence behavior, and the significant memory and processing-time savings it achieves compared to BP-based approaches.
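To make the layer-local training idea concrete, below is a minimal sketch of a forward-forward layer in PyTorch, following Hinton's original goodness formulation (sum of squared activations compared against a threshold). The class name FFLayer, the threshold theta, and all hyperparameters are illustrative assumptions, not the paper's actual architecture; the sketch only shows why each layer can optimize a purely local loss, so that a non-differentiable stage between layers does not block training.

```python
# Minimal forward-forward (FF) layer sketch; names and hyperparameters
# are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FFLayer(nn.Module):
    def __init__(self, d_in: int, d_out: int, theta: float = 2.0, lr: float = 1e-3):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.theta = theta  # goodness threshold separating positive/negative data
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu(self.linear(x))

    def goodness(self, h: torch.Tensor) -> torch.Tensor:
        # Per-sample "goodness": sum of squared activations.
        return h.pow(2).sum(dim=1)

    def train_step(self, x_pos: torch.Tensor, x_neg: torch.Tensor):
        # Purely local objective: raise the goodness of positive samples
        # above theta and push that of negative samples below it. No
        # gradient ever crosses a layer boundary, so nothing placed
        # between layers has to be differentiable.
        g_pos = self.goodness(self.forward(x_pos))
        g_neg = self.goodness(self.forward(x_neg))
        loss = (F.softplus(self.theta - g_pos) + F.softplus(g_neg - self.theta)).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detached outputs feed the next layer, which trains independently.
        with torch.no_grad():
            return self.forward(x_pos), self.forward(x_neg)


# Usage sketch: two FF layers with a black-box noisy channel in between.
enc = FFLayer(8, 16)
dec = FFLayer(16, 16)
x_pos, x_neg = torch.randn(32, 8), torch.randn(32, 8)
h_pos, h_neg = enc.train_step(x_pos, x_neg)
# Hypothetical additive-noise stage; it is never differentiated through.
y_pos = h_pos + 0.1 * torch.randn_like(h_pos)
y_neg = h_neg + 0.1 * torch.randn_like(h_neg)
dec.train_step(y_pos, y_neg)
```

Because gradients never cross the stage between layers, the same pattern applies when a fixed, non-differentiable modulation sits in the pipeline, which is the second scenario the paper evaluates.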
Similar Papers
Scalable Forward-Forward Algorithm
Machine Learning (CS)
Trains computer brains without needing to go backward.
Beyond Backpropagation: Exploring Innovative Algorithms for Energy-Efficient Deep Neural Network Training
Machine Learning (CS)
Trains AI faster and uses less power.
NetworkFF: Unified Layer Optimization in Forward-Only Neural Networks
Machine Learning (CS)
Makes AI learn better by sharing information between layers.