Balancing Fairness and Performance in Multi-User Spark Workloads with Dynamic Scheduling (extended version)
By: Dāvis Kažemaks, Laurens Versluis, Burcu Kulahcioglu Ozkan, and more
Potential Business Impact:
Makes big computer jobs finish faster and fairer.
Apache Spark is a widely adopted framework for large-scale data processing. However, in industrial analytics environments, Spark's built-in schedulers, such as FIFO and fair scheduling, struggle to maintain both user-level fairness and low mean response time, particularly in long-running shared applications. Existing solutions typically focus on job-level fairness, which unintentionally favors users who submit more jobs. Although Spark offers a built-in fair scheduler, it lacks adaptability to dynamic user workloads and may degrade overall job performance. We present the User Weighted Fair Queuing (UWFQ) scheduler, designed to minimize job response times while ensuring equitable resource distribution across users and their respective jobs. UWFQ simulates a virtual fair queuing system and schedules jobs based on their estimated finish times under a bounded fairness model. To further address task skew and reduce priority inversions, which are common in Spark workloads, we introduce runtime partitioning, a method that dynamically refines task granularity based on expected runtime. We implement UWFQ within the Spark framework and evaluate its performance using multi-user synthetic workloads and Google cluster traces. We show that UWFQ reduces the average response time of small jobs by up to 74% compared with Spark's built-in schedulers and with state-of-the-art fair scheduling algorithms.
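As a rough illustration of the scheduling idea, the Scala sketch below implements a self-clocked weighted fair queue over users: each submitted job is stamped with a virtual finish time derived from its estimated runtime and its user's weight, and the scheduler always dispatches the pending job with the smallest stamp. The names (Job, UserWeightedFairQueue, submit, next) and the self-clocked virtual-time update are illustrative assumptions, not the paper's implementation; the bounded fairness model, runtime partitioning, and Spark integration described in the abstract are omitted.

import scala.collection.mutable

final case class Job(id: String, user: String, estimatedRuntime: Double)

class UserWeightedFairQueue(userWeight: String => Double) {
  // Min-heap on virtual finish time (PriorityQueue dequeues the "largest"
  // element, so the ordering on the stamp is reversed).
  private val pending = mutable.PriorityQueue.empty[(Double, Job)](
    Ordering.by[(Double, Job), Double](_._1).reverse)
  private val lastFinish = mutable.Map.empty[String, Double].withDefaultValue(0.0)
  private var virtualTime = 0.0

  // On submission, the job's virtual start is the later of the current
  // virtual time and its user's previous virtual finish; the job then
  // advances that user's clock by estimatedRuntime / weight.
  def submit(job: Job): Unit = {
    val start  = math.max(virtualTime, lastFinish(job.user))
    val finish = start + job.estimatedRuntime / userWeight(job.user)
    lastFinish(job.user) = finish
    pending.enqueue((finish, job))
  }

  // Dispatch the pending job with the smallest virtual finish time and
  // advance the global virtual clock (self-clocked approximation).
  def next(): Option[Job] =
    if (pending.isEmpty) None
    else {
      val (finish, job) = pending.dequeue()
      virtualTime = math.max(virtualTime, finish)
      Some(job)
    }
}

Because the virtual clock advances with scheduled work rather than with the number of submitted jobs, a user who floods the queue mostly delays their own later jobs, which is the user-level fairness property the abstract contrasts with job-level fairness.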
Similar Papers
Equinox: Holistic Fair Scheduling in Serving Large Language Models
Distributed, Parallel, and Cluster Computing
Makes AI answer questions faster and fairer.
QoS-Aware Proportional Fairness Scheduling for Multi-Flow 5G UEs: A Smart Factory Perspective
Networking and Internet Architecture
Makes factory machines work better together.
Dispatching Odyssey: Exploring Performance in Computing Clusters under Real-world Workloads
Distributed, Parallel, and Cluster Computing
Makes computers finish jobs faster by smarter organizing.