Generating Dynamic Graph Algorithms for Multiple Backends for a Graph DSL
By: Nibedita Behera, Ashwina Kumar, Atharva Chougule, and more
Potential Business Impact:
Makes computer programs handle changing information faster.
With the rapid growth of unstructured and semi-structured data, parallelizing graph algorithms has become essential for efficiency. However, due to the inherent irregularity in computation, memory access patterns, and communication, graph algorithms are notoriously difficult to parallelize. To address this challenge, several libraries, frameworks, and domain-specific languages (DSLs) have been proposed to ease the parallel programming burden for domain experts. Existing frameworks partially or fully abstract away the intricacies of parallelism, provide intuitive scheduling mnemonics, and employ program analysis to identify data races and generate synchronization code. Despite these advances, most frameworks are limited in their abstractions and runtime optimizations, especially because they target static graphs. In contrast, many real-world graphs are inherently dynamic, with structures that evolve over time through insertions, deletions, and modifications of vertices, edges, and attributes. Generating efficient and correctly synchronized code for such dynamic graph algorithms remains a significant challenge. In this work, we introduce an abstraction scheme and runtime optimizations for the efficient processing of morph algorithms. Specifically, given an initial graph $G$ and a set of updates $\Delta G$ involving edge insertions and deletions, we express the dynamic processing logic through a DSL and automatically generate parallel code targeting multicore, distributed, and many-core environments. We demonstrate the effectiveness of our approach by applying the DSL-generated code to ten large graphs with diverse characteristics and three widely used algorithms: Shortest Paths, PageRank, and Triangle Counting.
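To make the "initial graph plus updates" setting concrete, the following is a minimal sketch (not the paper's DSL or generated code) of the kind of incremental computation such systems automate: after a batch of edge insertions, shortest-path distances are repaired by relaxing only from the affected endpoints rather than recomputing from scratch. All names here are illustrative, and the example is sequential Python for clarity.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src, n):
    """Full recompute: standard Dijkstra over an adjacency-list graph."""
    dist = [float("inf")] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def apply_insertions(adj, dist, inserts):
    """Incremental update for a batch of edge insertions (the Delta-G):
    seed the queue only with vertices whose distance improves, then
    propagate improvements outward. Deletions are harder and typically
    require identifying and recomputing an affected subgraph."""
    pq = []
    for u, v, w in inserts:
        adj[u].append((v, w))
        if dist[u] + w < dist[v]:
            dist[v] = dist[u] + w
            heapq.heappush(pq, (dist[v], v))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Initial graph G, then a batch of insertions.
adj = defaultdict(list)
for u, v, w in [(0, 1, 5), (1, 2, 5), (0, 3, 1)]:
    adj[u].append((v, w))
dist = dijkstra(adj, 0, 4)          # distances in G: [0, 5, 10, 1]
dist = apply_insertions(adj, dist, [(3, 2, 2), (3, 1, 1)])
print(dist)                          # repaired distances: [0, 2, 3, 1]
```

The point a DSL-based generator exploits is that the propagation loop is the same worklist pattern for many algorithms, so the per-vertex update logic can be written once and compiled to multicore, distributed, or GPU backends with the synchronization inserted automatically.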
Similar Papers
Optimizations on Graph-Level for Domain Specific Computations in Julia and Application to QED
Distributed, Parallel, and Cluster Computing
Makes super-fast computers solve hard science problems.
Speculative Automated Refactoring of Imperative Deep Learning Programs to Graph Execution
Software Engineering
Makes computer learning programs run much faster.
Gradual Metaprogramming
Programming Languages
Finds coding mistakes earlier in data programs.