VLCs: Managing Parallelism with Virtualized Libraries
By: Yineng Yan, William Ruys, Hochan Lee, and more
Potential Business Impact:
Lets programs share hardware resources without slowing each other down.
As the complexity and scale of modern parallel machines continue to grow, programmers increasingly rely on the composition of software libraries to encapsulate and exploit parallelism. However, many libraries are not designed with composition in mind and assume exclusive access to all resources. Using such libraries concurrently can result in contention and degraded performance. Prior solutions involve modifying the libraries or the OS, which is often infeasible. We propose Virtual Library Contexts (VLCs), process subunits that encapsulate sets of libraries and their associated resource allocations. VLCs control the resource utilization of these libraries without modifying library code. This enables the user to partition resources between libraries to prevent contention, or to load multiple copies of the same library to allow parallel execution of otherwise thread-unsafe code within the same process. In this paper, we describe and evaluate C++ and Python prototypes of VLCs. Experiments show VLCs enable speedups of up to 2.85x on benchmarks including applications using OpenMP, OpenBLAS, and LibTorch.
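To make the partitioning idea concrete, here is a minimal, hedged sketch of what VLCs automate: giving each library context a disjoint share of the machine's cores so they cannot contend. This is not the paper's VLC API; it is a hand-rolled, Linux-only illustration using `os.sched_setaffinity`, with the labels `lib_a` and `lib_b` standing in for two hypothetical resource-hungry libraries.

```python
# Illustration only: VLC-style core partitioning done by hand.
# A VLC would grant each encapsulated library its own resource share
# transparently; here we pin two worker processes to disjoint CPU sets.
import multiprocessing as mp
import os

def worker(label, cpuset, queue):
    # Inside its "context", this worker only ever sees its granted cores.
    os.sched_setaffinity(0, cpuset)
    queue.put((label, os.sched_getaffinity(0)))

def run_partitioned():
    cpus = sorted(os.sched_getaffinity(0))
    half = max(1, len(cpus) // 2)
    # Split the available cores in two (degenerates to sharing on 1 core).
    part_a = set(cpus[:half])
    part_b = set(cpus[half:]) or part_a
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(lbl, cs, queue))
             for lbl, cs in (("lib_a", part_a), ("lib_b", part_b))]
    for p in procs:
        p.start()
    results = dict(queue.get() for _ in procs)
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(run_partitioned())
```

The key difference from this sketch is that VLCs achieve the same isolation within a single process and without the library author or user rewriting any code.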
Similar Papers
Library Liberation: Competitive Performance Matmul Through Compiler-composed Nanokernels
Machine Learning (CS)
Makes AI run faster on computers automatically.
Optimizing CPU Cache Utilization in Cloud VMs with Accurate Cache Abstraction
Distributed, Parallel, and Cluster Computing
Makes cloud computers run faster by managing memory better.