Fulcrum: Optimizing Concurrent DNN Training and Inferencing on Edge Accelerators
By: Prashanthi S. K., Saisamarth Taluri, Pranav Gupta, and more
Potential Business Impact:
Lets smart devices run two jobs at once.
The proliferation of GPU-accelerated edge devices like Nvidia Jetsons, together with rising privacy concerns, is placing an emphasis on concurrent DNN training and inferencing on edge devices. Inference and training have different computing and QoS goals, but edge accelerators like the Jetson do not support native GPU sharing and expose thousands of power modes. This requires careful time-sharing of concurrent workloads to meet power--performance goals while limiting costly profiling. In this paper, we design an intelligent time-slicing approach for concurrent DNN training and inferencing on Jetsons. We formulate an optimization problem that interleaves training and inferencing minibatches and decides the device power mode and inference minibatch size, maximizing the training throughput while staying within latency and power budgets, with modest profiling costs. We propose GMD, an efficient multi-dimensional gradient descent search that profiles just $15$ power modes, and ALS, an Active Learning technique that identifies reusable Pareto-optimal power modes but profiles $50$--$150$ power modes. We evaluate these within our Fulcrum scheduler for $273,000+$ configurations across $15$ DNN workloads. We also evaluate our strategies on inference with dynamic arrivals and on concurrent inferences. ALS and GMD outperform both simpler baselines and more complex baselines that use larger-scale profiling. Their solutions satisfy the latency and power budgets in $>97\%$ of our runs and are, on average, within $7\%$ of the optimal throughput.
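The GMD idea above can be pictured as a coordinate-wise descent over discrete power-mode knobs, keeping only moves that stay within the latency and power budgets. This is a minimal illustrative sketch: the knob names, candidate values, and the synthetic `profile()` cost model are all hypothetical stand-ins, not the paper's actual formulation or profiler.

```python
# Hypothetical sketch of a GMD-style multi-dimensional search over
# discrete power-mode knobs. Knob values and the cost model are synthetic.

# Discrete candidate values per knob dimension (illustrative only).
KNOBS = {
    "cpu_cores": [2, 4, 6, 8],
    "cpu_freq":  [0.7, 1.2, 1.9],  # GHz
    "gpu_freq":  [0.5, 0.9, 1.3],  # GHz
}

def profile(config):
    """Stand-in for profiling one power mode on the device; returns
    (training throughput, inference latency, power). Synthetic model."""
    cores, cf, gf = config["cpu_cores"], config["cpu_freq"], config["gpu_freq"]
    throughput = cores * cf * 0.5 + gf * 3.0
    latency = 100.0 / (cf * gf * cores)
    power = 2.0 * cores * cf + 8.0 * gf
    return throughput, latency, power

def feasible(lat, pwr, lat_budget, pwr_budget):
    return lat <= lat_budget and pwr <= pwr_budget

def gmd_search(lat_budget, pwr_budget):
    """From a mid-range starting power mode, move one knob at a time to a
    neighboring value whenever that improves feasible throughput; stop when
    no single-knob move helps. Returns (config, throughput, #profiles)."""
    config = {k: v[len(v) // 2] for k, v in KNOBS.items()}
    tp, lat, pwr = profile(config)
    profiled = 1
    best_tp = tp if feasible(lat, pwr, lat_budget, pwr_budget) else float("-inf")
    improved = True
    while improved:
        improved = False
        for knob, values in KNOBS.items():
            i = values.index(config[knob])
            for j in (i - 1, i + 1):          # probe both neighbors
                if 0 <= j < len(values):
                    cand = dict(config, **{knob: values[j]})
                    tp, lat, pwr = profile(cand)
                    profiled += 1
                    if feasible(lat, pwr, lat_budget, pwr_budget) and tp > best_tp:
                        config, best_tp = cand, tp
                        improved = True
    return config, best_tp, profiled
```

Because each step probes only the immediate neighbors along one dimension, the search touches a small handful of power modes rather than the full cross-product of knob settings, which is the profiling-cost intuition behind GMD.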
Similar Papers
Pagoda: An Energy and Time Roofline Study for DNN Workloads on Edge Accelerators
Distributed, Parallel, and Cluster Computing
Makes AI run faster and use less power.
Characterizing the Performance of Accelerated Jetson Edge Devices for Training Deep Learning Models
Distributed, Parallel, and Cluster Computing
Trains smart computer programs on small gadgets.
Evaluating Multi-Instance DNN Inferencing on Multiple Accelerators of an Edge Device
Distributed, Parallel, and Cluster Computing
Makes smart devices run faster using all their parts.