Parallax: Runtime Parallelization for Operator Fallbacks in Heterogeneous Edge Systems
By: Chong Tang, Hao Dai, Jagmohan Chauhan
Potential Business Impact:
Makes phone apps run faster and use less power.
The growing demand for real-time DNN applications on edge devices necessitates faster inference of increasingly complex models. Although many devices include specialized accelerators (e.g., mobile GPUs), dynamic control-flow operators and unsupported kernels often fall back to CPU execution. Existing frameworks handle these fallbacks poorly, leaving CPU cores idle and causing high latency and memory spikes. We introduce Parallax, a framework that accelerates mobile DNN inference without model refactoring or custom operator implementations. Parallax first partitions the computation DAG to expose parallelism, then employs branch-aware memory management with dedicated arenas and buffer reuse to reduce the runtime footprint. An adaptive scheduler executes branches according to device memory constraints, while fine-grained subgraph control enables heterogeneous inference of dynamic models. Evaluated on five representative DNNs across three mobile devices, Parallax achieves up to a 46% latency reduction, keeps memory overhead controlled (26.5% on average), and delivers up to 30% energy savings compared with state-of-the-art frameworks, improvements aligned with the responsiveness demands of real-time mobile inference.
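To make the scheduling idea concrete, below is a minimal sketch (not the authors' implementation) of how independent branches of a computation DAG could be dispatched to CPU workers in parallel while an estimated memory budget caps their combined buffer footprint. All names here (Branch, MEMORY_BUDGET_MB, run) are illustrative assumptions, not Parallax's actual API.

```python
# Illustrative sketch of budget-aware parallel branch execution.
# Assumption: each DAG branch carries an estimate of its peak buffer footprint.

from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from threading import Condition

MEMORY_BUDGET_MB = 256          # assumed per-device limit for fallback buffers
_gate = Condition()
_in_flight_mb = 0


@dataclass
class Branch:
    name: str
    peak_mem_mb: int            # estimated peak footprint of this branch's buffers

    def run(self) -> str:
        # Placeholder for executing the branch's CPU-fallback operators.
        return f"{self.name} done"


def _run_with_budget(branch: Branch) -> str:
    """Block until admitting this branch keeps the total footprint under budget."""
    global _in_flight_mb
    with _gate:
        _gate.wait_for(lambda: _in_flight_mb + branch.peak_mem_mb <= MEMORY_BUDGET_MB)
        _in_flight_mb += branch.peak_mem_mb
    try:
        return branch.run()
    finally:
        with _gate:
            _in_flight_mb -= branch.peak_mem_mb
            _gate.notify_all()


def schedule(branches: list[Branch], workers: int = 4) -> list[str]:
    # Branches with no data dependencies between them may run concurrently;
    # the memory gate above keeps their aggregate footprint bounded.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(_run_with_budget, branches))


if __name__ == "__main__":
    dag_branches = [Branch("gru_branch", 96), Branch("attn_branch", 128),
                    Branch("postproc", 64)]
    print(schedule(dag_branches))
```

This sketch only captures the admission-control aspect; the paper's branch-aware arenas, buffer reuse, and heterogeneous subgraph placement are not modeled here.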
Similar Papers
Parallax: Efficient LLM Inference Service over Decentralized Environment
Distributed, Parallel, and Cluster Computing
Shares computer power to run AI faster.
Accelerating Mobile Inference through Fine-Grained CPU-GPU Co-Execution
Machine Learning (CS)
Lets phones run smart programs much faster.
Joint Partitioning and Placement of Foundation Models for Real-Time Edge AI
Distributed, Parallel, and Cluster Computing
Lets AI work better on phones and other devices.