Multi-Task Lifelong Reinforcement Learning for Wireless Sensor Networks
By: Hossein Mohammadi Firouzjaei, Rafaela Scaciota, Sumudu Samarakoon
Potential Business Impact:
Makes wireless sensors use less power.
Enhancing the sustainability and efficiency of wireless sensor networks (WSNs) in dynamic and unpredictable environments requires adaptive communication and energy-harvesting strategies. We propose a novel adaptive control strategy for WSNs that jointly optimizes data transmission and energy harvesting (EH) to minimize overall energy consumption while ensuring queue stability and satisfying energy-storage constraints under dynamic environmental conditions. Adaptability is achieved by transferring known environment-specific knowledge to new conditions using lifelong reinforcement learning. We evaluate the proposed method against two baseline frameworks: Lyapunov-based optimization and policy-gradient reinforcement learning (RL). Simulation results demonstrate that our approach rapidly adapts to changing environmental conditions by leveraging transferable knowledge, reaching near-optimal performance approximately $30\%$ faster than the RL method and $60\%$ faster than the Lyapunov-based approach. The implementation is available in our GitHub repository for reproducibility [1].
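The core idea behind the adaptability claim, reusing a policy learned in one environment as the starting point for learning in a new one, can be illustrated with a minimal sketch. This toy example is an assumption for illustration only: it uses a two-armed bandit with a softmax policy and REINFORCE updates, not the paper's actual WSN model, reward structure, or lifelong-RL algorithm.

```python
import numpy as np

# Toy illustration of warm-starting a policy across tasks, in the
# spirit of lifelong RL knowledge transfer. All environments and
# hyperparameters here are illustrative assumptions.
rng = np.random.default_rng(0)

def train_policy(arm_rewards, theta_init, steps=200, lr=0.1):
    """REINFORCE on a 2-armed bandit with a softmax policy."""
    theta = theta_init.copy()
    for _ in range(steps):
        probs = np.exp(theta) / np.exp(theta).sum()
        a = rng.choice(2, p=probs)                  # sample an action
        r = arm_rewards[a] + rng.normal(scale=0.1)  # noisy reward
        grad = -probs
        grad[a] += 1.0                              # grad of log softmax
        theta += lr * r * grad
    return theta

# Task A: arm 0 pays more. Learn from scratch.
theta_a = train_policy(np.array([1.0, 0.2]), np.zeros(2))

# Task B: a similar environment with shifted rewards. Transfer the
# learned parameters instead of restarting from zero, so far fewer
# steps are needed to adapt.
theta_b = train_policy(np.array([0.9, 0.3]), theta_a, steps=50)
```

The transferred policy starts Task B already favoring the better arm, so the short fine-tuning run suffices; this mirrors, at toy scale, why knowledge transfer speeds up adaptation to new conditions.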
Similar Papers
Enhanced Evolutionary Multi-Objective Deep Reinforcement Learning for Reliable and Efficient Wireless Rechargeable Sensor Networks
Networking and Internet Architecture
Keeps sensors working longer without recharging.
Active management of battery degradation in wireless sensor network using deep reinforcement learning for group battery replacement
Machine Learning (CS)
Makes wireless sensors last longer for easier repairs.
Performance Optimization of Energy-Harvesting Underlay Cognitive Radio Networks Using Reinforcement Learning
Signal Processing
Helps phones use less power by smart energy choices.