Performance Optimization of Energy-Harvesting Underlay Cognitive Radio Networks Using Reinforcement Learning
By: Deemah H. Tashman, Soumaya Cherkaoui, Walaa Hamouda
Potential Business Impact:
Helps phones use less power by making smart energy choices.
In this paper, a reinforcement learning technique is employed to maximize the performance of a cognitive radio network (CRN). In the presence of primary users (PUs), it is assumed that two secondary users (SUs) access the licensed band in underlay mode. In addition, the SU transmitter is assumed to be an energy-constrained device that must harvest energy in order to transmit signals to its intended destination. Therefore, we propose that there are two main sources of energy: the interference from PUs' transmissions and ambient radio frequency (RF) sources. Based on a predetermined threshold, the SU selects whether to gather energy from the PUs or only from ambient sources. Energy harvesting from the PUs' messages is accomplished via the time-switching approach. In addition, based on a deep Q-network (DQN) approach, the SU transmitter determines whether to harvest energy or transmit messages during each time slot, and selects a suitable transmission power to maximize its average data rate. Our results show that the proposed approach converges and outperforms a baseline strategy.
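As a rough illustration of the slot-by-slot decision the abstract describes, the sketch below implements a minimal DQN agent in PyTorch. It is not the authors' code: the state (battery level, channel gain, PU interference), the discrete action set (one harvest action plus a few candidate transmit powers), the reward (instantaneous SU rate, zero while harvesting), and the toy environment dynamics in `step` are all illustrative assumptions, and a full DQN would also use a target network and a bounded replay buffer.

```python
# Minimal sketch of a DQN agent choosing, per time slot, between harvesting
# energy and transmitting at one of several powers. All dynamics are placeholders.
import random
import numpy as np
import torch
import torch.nn as nn

POWERS = [0.1, 0.5, 1.0]            # candidate SU transmit powers (illustrative)
N_ACTIONS = 1 + len(POWERS)         # action 0 = harvest, 1.. = transmit at POWERS[a-1]
STATE_DIM = 3                       # (battery level, channel gain, PU interference)

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.1
replay = []                         # unbounded here; a real agent would cap it

def step(state, action):
    """Toy environment: returns (next_state, reward). Placeholder dynamics."""
    battery, gain, interference = state
    if action == 0:                              # harvest during this slot
        battery = min(1.0, battery + 0.1 + 0.05 * interference)
        reward = 0.0
    else:                                        # transmit at the chosen power
        p = POWERS[action - 1]
        if battery >= 0.1 * p:                   # enough stored energy
            battery -= 0.1 * p
            reward = np.log2(1 + p * gain / (interference + 0.1))  # SU rate
        else:
            reward = 0.0                         # transmission not possible
    next_state = np.array([battery, np.random.rayleigh(0.5), np.random.rand()],
                          dtype=np.float32)
    return next_state, reward

state = np.array([0.5, 0.5, 0.5], dtype=np.float32)
for t in range(2000):
    if random.random() < eps:                    # epsilon-greedy exploration
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(state)).argmax())
    next_state, reward = step(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state
    if len(replay) >= 32:                        # mini-batch Q-learning update
        batch = random.sample(replay, 32)
        s, a, r, s2 = map(np.array, zip(*batch))
        q = q_net(torch.from_numpy(s)).gather(
            1, torch.tensor(a, dtype=torch.int64).unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = torch.from_numpy(r).float() + \
                gamma * q_net(torch.from_numpy(s2)).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```

The key design point this sketch reflects is that harvesting and transmitting compete for the same time slot (the time-switching structure), so the agent must learn when the long-term value of charging the battery exceeds the immediate rate reward of transmitting.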
Similar Papers
Optimizing Cognitive Networks: Reinforcement Learning Meets Energy Harvesting Over Cascaded Channels
Emerging Technologies
Makes car radios safer from spies.
Maximizing Reliability in Overlay Radio Networks with Time Switching and Power Splitting Energy Harvesting
Emerging Technologies
Lets radios share airwaves without fighting.
Deep Reinforcement Learning for EH-Enabled Cognitive-IoT Under Jamming Attacks
Signal Processing
Protects smart devices from jamming and saves power.