Score: 1

Parallax: Runtime Parallelization for Operator Fallbacks in Heterogeneous Edge Systems

Published: December 12, 2025 | arXiv ID: 2512.11532v1

By: Chong Tang, Hao Dai, Jagmohan Chauhan

Potential Business Impact:

Makes phone apps run faster and use less power.

Business Areas:
PaaS Software

The growing demand for real-time DNN applications on edge devices necessitates faster inference of increasingly complex models. Although many devices include specialized accelerators (e.g., mobile GPUs), dynamic control-flow operators and unsupported kernels often fall back to CPU execution. Existing frameworks handle these fallbacks poorly, leaving CPU cores idle and causing high latency and memory spikes. We introduce Parallax, a framework that accelerates mobile DNN inference without model refactoring or custom operator implementations. Parallax first partitions the computation DAG to expose parallelism, then employs branch-aware memory management with dedicated arenas and buffer reuse to reduce the runtime footprint. An adaptive scheduler executes branches according to device memory constraints, while fine-grained subgraph control enables heterogeneous inference of dynamic models. Evaluated on five representative DNNs across three different mobile devices, Parallax achieves up to 46% latency reduction, maintains controlled memory overhead (26.5% on average), and delivers up to 30% energy savings compared with state-of-the-art frameworks, offering improvements aligned with the responsiveness demands of real-time mobile inference.
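The two core ideas in the abstract — partitioning the computation DAG to expose parallel branches, and scheduling those branches under a device memory budget — can be illustrated with a minimal sketch. This is not Parallax's actual algorithm (the paper's partitioner and scheduler are more sophisticated); it only shows the general pattern: group DAG nodes into dependency levels (Kahn's algorithm), then greedily admit branches within each level until a memory cap is reached. All names (`parallel_levels`, `schedule`, the toy graph) are illustrative assumptions.

```python
from collections import defaultdict

def parallel_levels(edges, nodes):
    """Group DAG nodes into levels via Kahn's algorithm; nodes within
    a level have no mutual dependencies and could run concurrently."""
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    level = [n for n in nodes if indeg[n] == 0]
    levels = []
    while level:
        levels.append(level)
        nxt = []
        for u in level:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        level = nxt
    return levels

def schedule(levels, mem, budget):
    """Within each level, admit branches greedily until the memory
    budget is hit; overflow branches wait for the next wave."""
    waves = []
    for level in levels:
        pending = sorted(level, key=lambda n: -mem[n])  # largest first
        while pending:
            wave, used, rest = [], 0, []
            for n in pending:
                if used + mem[n] <= budget:
                    wave.append(n)
                    used += mem[n]
                else:
                    rest.append(n)
            if not wave:  # a single branch exceeds the budget: run it alone
                wave, rest = [pending[0]], pending[1:]
            waves.append(wave)
            pending = rest
    return waves

# Toy DAG: two independent branches 'a' and 'b' feeding a join node 'c'.
levels = parallel_levels([("a", "c"), ("b", "c")], ["a", "b", "c"])
waves = schedule(levels, {"a": 3, "b": 3, "c": 2}, budget=4)
```

With a budget of 4, the two level-0 branches (3 each) cannot coexist, so the scheduler serializes them into separate waves — the same trade-off the abstract describes between parallelism and controlled memory overhead.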

Country of Origin
🇬🇧 United Kingdom

Page Count
15 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing