Machine Learning and CPU (Central Processing Unit) Scheduling Co-Optimization over a Network of Computing Centers
By: Mohammadreza Doostmohammadian, Zulfiya R. Gabidullina, Hamid R. Rabiee
Potential Business Impact:
Makes computers learn faster using less power.
In the rapidly evolving research on artificial intelligence (AI), the demand for fast, computationally efficient, and scalable solutions has increased in recent years. This paper considers the problem of optimizing the computing resources for distributed machine learning (ML) and optimization. Given data distributed over a network of computing nodes/servers, the idea is to optimally assign the CPU (central processing unit) usage while each computing node simultaneously trains a local model on its own share of the data. This formulates the problem as a co-optimization setup that (i) optimizes the data processing and (ii) optimally allocates the computing resources. The information-sharing network among the nodes may be time-varying, but with balanced weights to ensure consensus-type convergence of the algorithm. The algorithm is all-time feasible, meaning that the computing resource-demand balance constraint holds at every iteration of the proposed solution. Moreover, the solution accommodates log-scale quantization over the information-sharing channels, so nodes may exchange log-quantized data. As example applications, distributed support vector machines (SVM) and regression are considered as the ML training models. Results from perturbation theory, along with Lyapunov stability and eigen-spectrum analysis, are used to prove convergence towards the optimal solution. Compared to existing CPU scheduling solutions, the proposed algorithm improves the cost optimality gap by more than $50\%$.
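To make the setup concrete, below is a minimal Python sketch of a consensus-type, all-time-feasible resource allocation update with log-scale quantized exchange. It is not the paper's exact algorithm: the quadratic local costs (standing in for the ML-training plus CPU-cost terms), the fixed balanced ring network, the quantizer, the step size, and all names are illustrative assumptions.

# Minimal sketch (not the paper's exact algorithm): consensus-type,
# all-time-feasible CPU allocation with log-scale quantized exchange.
# Assumptions: quadratic local costs f_i(x_i) = a_i*(x_i - b_i)^2, a fixed
# symmetric (balanced) ring network, and an illustrative log quantizer.
import numpy as np

rng = np.random.default_rng(0)
n = 8                                  # number of computing nodes
a = rng.uniform(1.0, 3.0, n)           # local cost curvatures (hypothetical)
b = rng.uniform(0.0, 5.0, n)           # local cost minimizers (hypothetical)
demand = 20.0                          # total CPU resource to be allocated

def grad(x):
    # Gradient of the local costs f_i(x_i) = a_i*(x_i - b_i)^2.
    return 2.0 * a * (x - b)

def log_quantize(z, delta=0.05):
    # Illustrative log-scale (logarithmic) quantizer of exchanged values.
    out = np.zeros_like(z)
    nz = np.abs(z) > 1e-12
    out[nz] = np.sign(z[nz]) * np.exp(delta * np.round(np.log(np.abs(z[nz])) / delta))
    return out

# Balanced (symmetric) ring adjacency: each node talks to its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0

# Start from any allocation that satisfies the resource-demand balance exactly.
x = np.full(n, demand / n)
alpha = 0.02                           # step size (small enough for stability)

for _ in range(2000):
    g = log_quantize(grad(x))          # nodes share log-quantized gradients
    diff = g[None, :] - g[:, None]     # entry [i, j] = g_j - g_i
    # Weight-balanced pairwise differences keep sum(x) == demand at every
    # iteration, i.e. the allocation stays all-time feasible.
    x = x + alpha * (W * diff).sum(axis=1)

print("allocation:", np.round(x, 3))
print("balance preserved:", np.isclose(x.sum(), demand))

The design intuition is that, for a sum constraint, optimality requires all local gradients to agree; driving the (quantized) gradients toward consensus therefore moves the allocation toward the optimum while the balanced weights preserve the total resource at every step.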
Similar Papers
Machine learning-based cloud resource allocation algorithms: a comprehensive comparative review
Distributed, Parallel, and Cluster Computing
Makes computers use cloud power smarter and cheaper.
Intelligent Resource Allocation Optimization for Cloud Computing via Machine Learning
Distributed, Parallel, and Cluster Computing
Makes computer clouds work smarter and cheaper.
Accelerating Mobile Inference through Fine-Grained CPU-GPU Co-Execution
Machine Learning (CS)
Lets phones run smart programs much faster.