SparOA: Sparse and Operator-aware Hybrid Scheduling for Edge DNN Inference
By: Ziyang Zhang, Jie Liu, Luca Mottola
Potential Business Impact:
Makes smart devices run faster and use less power.
The resource demands of deep neural network (DNN) models introduce significant performance challenges, especially when deployed on resource-constrained edge devices. Existing solutions such as model compression often sacrifice accuracy, while specialized hardware remains costly and inflexible. Hybrid inference methods, meanwhile, typically overlook how operator characteristics affect performance. In this work, we present SparOA, a CPU-GPU hybrid inference framework that leverages both sparsity and computational intensity to optimize operator scheduling. SparOA addresses these challenges through three key components: (1) a threshold predictor that accurately determines optimal sparsity and computational intensity thresholds; (2) a reinforcement learning-based scheduler that dynamically optimizes resource allocation based on real-time hardware states; and (3) a hybrid inference engine that improves efficiency through asynchronous execution and batch size optimization. Extensive results show that SparOA achieves an average speedup of 1.22-1.31x over all baselines and outperforms CPU-only execution by up to 50.7x. SparOA also achieves optimal energy per inference, consuming 7%-16% less energy than the state-of-the-art co-execution baseline.
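The abstract's core idea is routing each operator to the CPU or GPU based on its sparsity and computational intensity relative to learned thresholds. The minimal Python sketch below illustrates that decision rule only; the names (`Operator`, `schedule_operator`) and the fixed threshold values are illustrative assumptions, not SparOA's actual API. In the paper, the thresholds come from a learned predictor and placement is refined by an RL-based scheduler.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    sparsity: float            # fraction of zero weights/activations, in [0, 1]
    compute_intensity: float   # FLOPs per byte of memory traffic

# Hypothetical fixed thresholds; SparOA instead predicts these at runtime.
SPARSITY_THRESHOLD = 0.6
INTENSITY_THRESHOLD = 10.0

def schedule_operator(op: Operator) -> str:
    """Toy placement rule: highly sparse, low-intensity operators tend to
    favor the CPU (sparse kernels, lower launch overhead), while dense,
    compute-intensive operators favor the GPU."""
    if op.sparsity >= SPARSITY_THRESHOLD and op.compute_intensity < INTENSITY_THRESHOLD:
        return "CPU"
    return "GPU"

if __name__ == "__main__":
    ops = [
        Operator("pruned_fc", sparsity=0.85, compute_intensity=2.5),
        Operator("conv3x3", sparsity=0.10, compute_intensity=40.0),
    ]
    for op in ops:
        print(f"{op.name} -> {schedule_operator(op)}")
```

A real co-execution framework would also weigh runtime hardware state (utilization, memory pressure) when placing operators, which is what SparOA's reinforcement learning scheduler handles.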
Similar Papers
Dora: QoE-Aware Hybrid Parallelism for Distributed Edge AI
Distributed, Parallel, and Cluster Computing
Makes AI apps faster and use less power.
SPARTA: Advancing Sparse Attention in Spiking Neural Networks via Spike-Timing-Based Prioritization
Machine Learning (CS)
Makes computers smarter and faster using brain signals.