When Should I Run My Application Benchmark?: Studying Cloud Performance Variability for the Case of Stream Processing Applications
By: Sören Henning, Adriano Vogel, Esteban Perez-Wohlfeil, and more
Potential Business Impact:
Makes performance tests run in the cloud more trustworthy.
Performance benchmarking is a common practice in software engineering, particularly when building large-scale, distributed, and data-intensive systems. While cloud environments offer several advantages for running benchmarks, it is often reported that benchmark results can vary significantly between repetitions, making it difficult to draw reliable conclusions about real-world performance. In this paper, we empirically quantify the impact of cloud performance variability on benchmarking results, focusing on stream processing applications as a representative type of data-intensive, performance-critical system. In a longitudinal study spanning more than three months, we repeatedly executed an application benchmark used in research and development at Dynatrace. This allows us to assess various aspects of performance variability, particularly concerning temporal effects. With approximately 591 hours of experiments, deploying 789 Kubernetes clusters on AWS and executing 2366 benchmarks, this is likely the largest study of its kind and the only one addressing performance from an end-to-end (i.e., application benchmark) perspective. Our study confirms that performance variability exists, but it is less pronounced than often assumed (coefficient of variation of < 3.7%). Unlike related studies, we find that performance does exhibit a daily and weekly pattern, although with only small variability (<= 2.5%). Reusing benchmarking infrastructure across multiple repetitions introduces only a slight reduction in result accuracy (<= 2.5 percentage points). These key observations hold consistently across different cloud regions and machine types with different processor architectures. We conclude that for engineers and researchers focused on detecting substantial performance differences (e.g., > 5%) in...
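The variability figures above are expressed as coefficients of variation across repeated benchmark executions. As a minimal illustration of that metric (this is not the authors' analysis code; the throughput values and the use of NumPy are assumptions for this sketch), the snippet below computes the coefficient of variation for a set of hypothetical benchmark repetitions:

```python
import numpy as np

# Hypothetical throughput results (records/s) from repeated executions of
# the same application benchmark on freshly provisioned cloud clusters.
# These numbers are illustrative only, not measurements from the paper.
throughput = np.array([98_500, 101_200, 99_800, 100_400, 97_900, 100_900])

mean = throughput.mean()
std = throughput.std(ddof=1)  # sample standard deviation across repetitions
cv = std / mean               # coefficient of variation (relative variability)

print(f"mean = {mean:.0f} records/s, std = {std:.0f}, CV = {cv:.2%}")
```

Read this way, a coefficient of variation below 3.7% indicates that run-to-run noise is small relative to the substantial performance differences (e.g., > 5%) that engineers and researchers typically aim to detect.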
Similar Papers
Should I Run My Cloud Benchmark on Black Friday?
Software Engineering
Cloud computer speed changes, but not as much as thought.
Sampling in Cloud Benchmarking: A Critical Review and Methodological Guidelines
Distributed, Parallel, and Cluster Computing
Makes computer tests fairer and more trustworthy.
Towards an Optimized Benchmarking Platform for CI/CD Pipelines
Distributed, Parallel, and Cluster Computing
Finds software problems faster, saving computer power.