MIRGE: An Array-Based Computational Framework for Scientific Computing
By: Matthias Diener, Matthew J. Smith, Michael T. Campbell, and more
MIRGE is a computational approach for scientific computing based on NumPy-like array computation, but using lazy evaluation to recast computation as data-flow graphs, where nodes represent immutable, multi-dimensional arrays. Evaluation of an array expression is deferred until its value is needed, at which point a pipeline is invoked that transforms high-level array expressions into lower-level intermediate representations (IR) and finally into executable code, through a multi-stage process. Domain-specific transformations, such as metadata-driven optimizations, GPU-parallelization strategies, and loop fusion techniques, improve performance and memory efficiency. MIRGE employs "array contexts" to abstract the interface between array expressions and heterogeneous execution environments (for example, lazy evaluation via OpenCL, or eager evaluation via NumPy or CuPy). The framework thus enables performance portability as well as separation of concerns between application logic, low-level implementation, and optimizations. By enabling scientific expressivity while facilitating performance tuning, MIRGE offers a robust, extensible platform for both computational research and scientific application development. This paper provides an overview of MIRGE. We further describe an application of MIRGE called MIRGE-Com, for supersonic combusting flows in a discontinuous Galerkin finite-element setting. We demonstrate its capabilities as a solver and highlight its performance characteristics on large-scale GPU hardware.
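The deferred-evaluation model described above can be illustrated with a minimal sketch. This is not MIRGE's actual API; the `LazyArray` class and its methods are hypothetical, showing only the general idea of building a data-flow graph of immutable array operations and evaluating it on demand (here via NumPy, where a real pipeline would lower the graph to an IR and generate device code).

```python
# Illustrative sketch only (not MIRGE's API): lazy array expressions
# record a data-flow graph; computation happens at evaluate() time.
import numpy as np


class LazyArray:
    """A node in a data-flow graph of immutable array expressions."""

    def __init__(self, op, inputs=(), data=None):
        self.op = op          # "const", "add", or "mul"
        self.inputs = inputs  # child nodes (operands)
        self.data = data      # concrete array, for "const" nodes only

    def __add__(self, other):
        # Building an expression creates a graph node; nothing is computed.
        return LazyArray("add", (self, other))

    def __mul__(self, other):
        return LazyArray("mul", (self, other))

    def evaluate(self):
        """Walk the graph and compute the result. A real pipeline would
        instead transform the graph into lower-level IR and then code."""
        if self.op == "const":
            return self.data
        lhs, rhs = (child.evaluate() for child in self.inputs)
        return lhs + rhs if self.op == "add" else lhs * rhs


# Constructing the expression only records the graph.
x = LazyArray("const", data=np.arange(4.0))   # [0. 1. 2. 3.]
y = LazyArray("const", data=np.ones(4))       # [1. 1. 1. 1.]
expr = (x + y) * y                             # "mul" node at the root

result = expr.evaluate()   # evaluation is triggered only here
print(result)              # -> [1. 2. 3. 4.]
```

In this picture, swapping the body of `evaluate()` for a different backend (eager NumPy or CuPy, or code generation via OpenCL) is the role the paper assigns to "array contexts": application code builds the same expression graph regardless of how, or where, it is eventually executed.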