The Merit of Simple Policies: Buying Performance With Parallelism and System Architecture

Published: March 20, 2025 | arXiv ID: 2503.16166v1

By: Mert Yildiz, Alexey Rolich, Andrea Baiocchi

Potential Business Impact:

Enables computing jobs to finish faster through smarter server cluster sizing and dispatching.

Business Areas:
Scheduling, Information Technology, Software

While scheduling and dispatching of computational workloads is a well-investigated subject, only recently has Google publicly released a vast, high-resolution measurement dataset of its cloud workloads. We revisit dispatching and scheduling algorithms fed by traffic workloads derived from those measurements. The main finding is that mean job response time attains a minimum as the number of servers in the computing cluster is varied, under the constraint that the overall computational budget is kept constant. Moreover, simple policies, such as Join Idle Queue, appear to attain the same performance as more complex, size-based policies for suitably high degrees of parallelism. Further, even better performance, clearly outperforming size-based dispatching policies, is obtained with multi-stage server clusters, even under very simple policies such as Round Robin. The takeaway is that parallelism and the architecture of computing systems might be powerful knobs for controlling performance, even more than policies, under realistic workload traffic.
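
To make the setup concrete, here is a minimal simulation sketch (not the authors' simulator) of the fixed-budget tradeoff the abstract describes: a total service capacity is split evenly across n servers, and arriving jobs are dispatched either Round Robin or Join Idle Queue (idle server if one exists, otherwise a random server). Poisson arrivals, exponential job sizes, FIFO queues per server, and all function and parameter names here are illustrative assumptions; the paper itself drives its experiments with Google trace-derived workloads.

```python
import random
from statistics import mean

def simulate(policy, n_servers, n_jobs=50_000, arrival_rate=0.7,
             total_speed=1.0, seed=42):
    """Mean response time under a dispatcher feeding per-server FIFO queues.

    The total capacity `total_speed` is split evenly across `n_servers`
    (fixed computational budget), so a job of size s takes
    s / (total_speed / n_servers) time units on the server it lands on.
    """
    rng = random.Random(seed)
    speed = total_speed / n_servers          # per-server speed under fixed budget
    free_at = [0.0] * n_servers              # time at which each server frees up
    t, rr_next, resp = 0.0, 0, []

    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)   # Poisson arrivals (assumption)
        size = rng.expovariate(1.0)          # exponential job sizes (assumption)

        if policy == "rr":                   # Round Robin: cycle through servers
            k = rr_next
            rr_next = (rr_next + 1) % n_servers
        else:                                # JIQ: any idle server, else random
            idle = [i for i, f in enumerate(free_at) if f <= t]
            k = rng.choice(idle) if idle else rng.randrange(n_servers)

        start = max(t, free_at[k])           # FIFO wait at the chosen server
        finish = start + size / speed
        free_at[k] = finish
        resp.append(finish - t)

    return mean(resp)

for n in (1, 2, 4, 8, 16):
    print(n, round(simulate("jiq", n), 3), round(simulate("rr", n), 3))
```

Sweeping n with the overall budget held constant is how one would look for the minimum in mean response time reported above; under such synthetic traffic the gap between JIQ and Round Robin also narrows as parallelism grows, which is the qualitative effect the paper quantifies on real trace-derived workloads.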

Country of Origin
🇮🇹 Italy

Page Count
6 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing