On-Device Deep Reinforcement Learning for Decentralized Task Offloading: Performance Trade-offs in the Training Process
By: Gorka Nieto, Idoia de la Iglesia, Cristina Perfecto, et al.
Potential Business Impact:
Lets less capable phones and edge devices run demanding applications by offloading computation to nearby servers.
Allowing less capable devices to offload computational tasks to more powerful devices or servers enables new applications that could not run acceptably on the device itself. Deciding where, and indeed whether, to offload each of these tasks is complex, and different approaches have been adopted to make offloading decisions. In this work, we propose a decentralized Deep Reinforcement Learning (DRL) agent that selects the computing location for each task. Unlike most existing work, we evaluate it on a real testbed composed of various edge devices, each running the agent to decide where to execute each task. These devices are connected to a Multi-Access Edge Computing (MEC) server and a Cloud server over 5G. We evaluate not only the agent's performance in meeting task requirements, but also the implications of running this type of agent locally, assessing the latency and energy-consumption trade-offs of training locally versus remotely.
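To make the decision problem concrete, the following is a minimal sketch of an offloading agent choosing among local, MEC, and cloud execution. It is not the authors' implementation: it substitutes tabular Q-learning for a deep network, treats each task decision as independent (a contextual-bandit simplification of the DRL formulation), and uses an entirely made-up latency model; action names, state buckets, and all numeric costs are illustrative assumptions.

```python
import random

# Assumed action set and toy cost model (not from the paper): local
# execution has no network delay but slow compute; MEC and cloud compute
# faster but add increasing network delay over the 5G link.
ACTIONS = ["LOCAL", "MEC", "CLOUD"]

def simulated_latency(task_size, action):
    compute = {"LOCAL": 1.0, "MEC": 0.3, "CLOUD": 0.1}[action]  # s per unit of work
    network = {"LOCAL": 0.0, "MEC": 0.2, "CLOUD": 0.6}[action]  # fixed transfer delay
    return task_size * compute + network

class OffloadingAgent:
    """Epsilon-greedy tabular agent; state = task-size bucket."""

    def __init__(self, n_states=3, eps=0.1, alpha=0.5):
        self.q = {(s, a): 0.0 for s in range(n_states) for a in ACTIONS}
        self.eps, self.alpha = eps, alpha

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(ACTIONS)  # explore
        return max(ACTIONS, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward):
        # One-step (bandit-style) update toward the observed reward.
        self.q[(state, action)] += self.alpha * (reward - self.q[(state, action)])

random.seed(0)
agent = OffloadingAgent()
sizes = [0.2, 1.0, 3.0]  # small / medium / large task buckets
for _ in range(2000):
    state = random.randrange(3)
    action = agent.act(state)
    reward = -simulated_latency(sizes[state], action)  # minimize latency
    agent.update(state, action, reward)

policy = {s: max(ACTIONS, key=lambda a: agent.q[(s, a)]) for s in range(3)}
print(policy)
```

Under this toy cost model the agent learns the intuitive policy: keep small tasks local, send medium ones to the MEC server, and push large ones to the cloud, where the fixed network delay is amortized by faster compute.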
Similar Papers
A Novel Deep Reinforcement Learning Method for Computation Offloading in Multi-User Mobile Edge Computing with Decentralization
Information Theory
Lets phones hand demanding computations to edge servers so apps stay responsive.
Cooperative Task Offloading through Asynchronous Deep Reinforcement Learning in Mobile Edge Computing for Future Networks
Machine Learning (CS)
Speeds up task completion on phones while reducing their energy use.
Intelligent Offloading in Vehicular Edge Computing: A Comprehensive Review of Deep Reinforcement Learning Approaches and Architectures
Machine Learning (CS)
Helps connected vehicles offload tasks to faster roadside and edge computers.