Power Control Based on Multi-Agent Deep Q Network for D2D Communication
By: Shi Gengtian, Takashi Koshimizu, Megumi Saito, and more
Potential Business Impact:
Makes phones share airwaves without messing up calls.
In device-to-device (D2D) communication under a cell with resource-sharing mode, the spectrum utilization of the system is improved. However, if the interference generated by D2D users is not controlled, the performance of the entire system and the quality of service (QoS) of cellular users may be degraded. Power control is therefore important because it reduces interference in the system. In this paper, we propose a reinforcement learning algorithm for adaptive power control that reduces interference and increases system throughput. Simulation results show that the proposed algorithm outperforms the traditional power control algorithm in LTE (Long Term Evolution).
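To illustrate the idea of learning a transmit power that trades D2D throughput against interference to the cellular user, here is a minimal, hedged sketch. It uses a single-state (bandit-style) Q-learning loop over discrete power levels rather than the paper's multi-agent deep Q network, and every channel gain, power level, and penalty weight below is an illustrative assumption, not a value from the paper:

```python
import math
import random

# Hedged sketch: tabular, single-state Q-learning as a simplified stand-in
# for the paper's multi-agent deep Q network. All numeric values are
# illustrative assumptions.

POWER_LEVELS = [1.0, 5.0, 10.0, 20.0]  # candidate D2D transmit powers (mW)
NOISE = 1.0        # receiver noise power (assumed)
G_D2D = 2.0        # assumed channel gain of the D2D link
G_INTERF = 1.0     # assumed gain from D2D transmitter to cellular receiver
CELL_POWER = 15.0  # assumed cellular transmit power
G_CELL = 1.5       # assumed gain of the cellular link
SINR_MIN = 2.0     # assumed QoS (SINR) threshold for the cellular user

def reward(p):
    """D2D rate minus a penalty when the cellular user's QoS is violated."""
    d2d_sinr = G_D2D * p / NOISE
    cell_sinr = G_CELL * CELL_POWER / (NOISE + G_INTERF * p)
    rate = math.log2(1.0 + d2d_sinr)
    penalty = 10.0 if cell_sinr < SINR_MIN else 0.0
    return rate - penalty

q = [0.0] * len(POWER_LEVELS)  # Q-value per power level (single state)
alpha, epsilon = 0.1, 0.1      # learning rate, exploration probability
random.seed(0)
for episode in range(2000):
    if random.random() < epsilon:                       # explore
        a = random.randrange(len(POWER_LEVELS))
    else:                                               # exploit
        a = max(range(len(POWER_LEVELS)), key=lambda i: q[i])
    r = reward(POWER_LEVELS[a])
    q[a] += alpha * (r - q[a])  # stateless Q-learning update

best = POWER_LEVELS[max(range(len(POWER_LEVELS)), key=lambda i: q[i])]
print("learned power:", best)
```

Under these assumed gains, the highest power level violates the cellular QoS constraint and incurs the penalty, so the agent settles on the largest power that keeps the cellular SINR above the threshold. The paper replaces the single-state table with a deep network over richer channel-state inputs and trains one agent per D2D link.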
Similar Papers
Deep Q-Learning-Driven Power Control for Enhanced NOMA User Performance
Information Theory
Drones boost slow internet for people far away.
Power Allocation for Delay Optimization in Device-to-Device Networks: A Graph Reinforcement Learning Approach
Systems and Control
Makes phones send data faster, fairly.
Channel, Mode and Power Optimization for non-Orthogonal D2D Communications: a Hybrid Approach
Networking and Internet Architecture
Lets phones talk directly, saving network power.