SynergAI: Edge-to-Cloud Synergy for Architecture-Driven High-Performance Orchestration for AI Inference
By: Foteini Stathopoulou, Aggelos Ferikoglou, Manolis Katsaragakis, and more
Potential Business Impact:
Makes AI inference faster and more dependable across different kinds of devices.
The rapid evolution of Artificial Intelligence (AI) and Machine Learning (ML) has significantly heightened computational demands, particularly for inference-serving workloads. While traditional cloud-based deployments offer scalability, they face challenges such as network congestion, high energy consumption, and privacy concerns. In contrast, edge computing provides low-latency and sustainable alternatives but is constrained by limited computational resources. In this work, we introduce SynergAI, a novel framework designed for performance- and architecture-aware inference serving across heterogeneous edge-to-cloud infrastructures. Built upon a comprehensive performance characterization of modern inference engines, SynergAI integrates a combination of offline and online decision-making policies to deliver intelligent, lightweight, and architecture-aware scheduling. By dynamically allocating workloads across diverse hardware architectures, it effectively minimizes Quality of Service (QoS) violations. We implement SynergAI within a Kubernetes-based ecosystem and evaluate its efficiency. Our results demonstrate that architecture-driven inference serving enables optimized and architecture-aware deployments on emerging hardware platforms, achieving an average reduction of 2.4x in QoS violations compared to a State-of-the-Art (SotA) solution.
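To make the abstract's idea of "architecture-aware scheduling" concrete, here is a minimal, hypothetical sketch of how a scheduler might combine offline per-architecture latency profiles with online load state to pick a placement that avoids QoS violations. All names, numbers, and the load-penalty model below are illustrative assumptions, not SynergAI's actual policies or implementation.

```python
# Hypothetical sketch of performance- and architecture-aware inference
# scheduling in the spirit of SynergAI. Device profiles, the load-penalty
# model, and all constants are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Device:
    name: str
    arch: str
    # Offline-profiled latency (ms) per model on this architecture.
    profiled_latency_ms: dict
    load: int = 0  # requests currently assigned (online state)


def schedule(model: str, qos_deadline_ms: float, devices):
    """Pick the device with the lowest estimated latency for `model`.

    Offline knowledge: per-architecture latency profiles.
    Online knowledge: current load, modeled as a naive multiplicative
    penalty (an assumption, not the paper's method).
    Returns (chosen_device, meets_qos_deadline).
    """
    best, best_est = None, float("inf")
    for d in devices:
        base = d.profiled_latency_ms.get(model)
        if base is None:
            continue  # model not characterized on this architecture
        est = base * (1 + d.load)  # simple queueing penalty
        if est < best_est:
            best, best_est = d, est
    if best is not None:
        best.load += 1
    return best, best_est <= qos_deadline_ms


edge_gpu = Device("edge-gpu", "arm64", {"resnet50": 30.0})
cloud_gpu = Device("cloud-a100", "x86_64", {"resnet50": 8.0, "bert": 15.0})

dev, meets_qos = schedule("resnet50", qos_deadline_ms=25.0,
                          devices=[edge_gpu, cloud_gpu])
print(dev.name, meets_qos)  # cloud-a100 True
```

A real system like the one the abstract describes would replace the naive load penalty with its characterization-driven online policy and run this logic as a Kubernetes scheduling extension; the sketch only shows the offline-plus-online decision shape.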
Similar Papers
Scalability Optimization in Cloud-Based AI Inference Services: Strategies for Real-Time Load Balancing and Automated Scaling
Distributed, Parallel, and Cluster Computing
Makes AI services faster while using less power.
Dynamic Pricing for On-Demand DNN Inference in the Edge-AI Market
Artificial Intelligence
Smarter AI on phones, faster and cheaper.
AI Factories: It's time to rethink the Cloud-HPC divide
Distributed, Parallel, and Cluster Computing
Makes supercomputers easier to use for AI.