Cornserve: Efficiently Serving Any-to-Any Multimodal Models
By: Jeff J. Ma, Jae-Won Chung, Jisang Ahn, and more
We present Cornserve, an efficient online serving system for an emerging class of multimodal models called Any-to-Any models. Any-to-Any models accept combinations of text and multimodal data (e.g., image, video, audio) as input and also generate combinations of text and multimodal data as output, introducing heterogeneity in request types, computation paths, and computation scaling to model serving. Cornserve allows model developers to describe the computation graph of generic Any-to-Any models, which consists of heterogeneous components such as multimodal encoders, autoregressive models like Large Language Models (LLMs), and multimodal generators like Diffusion Transformers (DiTs). Given this graph, Cornserve's planner automatically finds an optimized deployment plan for the model, including whether and how to disaggregate the model into smaller components based on model and workload characteristics. Cornserve's distributed runtime then executes the model per the plan, efficiently handling Any-to-Any model heterogeneity during online serving. Evaluations show that Cornserve can efficiently serve diverse Any-to-Any models and workloads, delivering up to 3.81$\times$ throughput improvement and up to 5.79$\times$ tail latency reduction over existing solutions.
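To make the computation-graph abstraction concrete, here is a minimal, hypothetical sketch (not Cornserve's actual API) of how an Any-to-Any model could be declared as a graph of heterogeneous components that a planner might later colocate or disaggregate; all class, field, and function names below are illustrative assumptions.

```python
# Hypothetical sketch, assuming a simple declarative graph description.
# None of these names come from Cornserve itself.
from dataclasses import dataclass, field


@dataclass
class Component:
    """One heterogeneous stage of an Any-to-Any model."""
    name: str          # e.g., "vision_encoder", "llm", "dit_generator"
    kind: str          # "encoder" | "autoregressive" | "generator"
    inputs: list[str]  # input modalities or upstream component names


@dataclass
class AnyToAnyGraph:
    """A computation graph over heterogeneous components."""
    components: list[Component] = field(default_factory=list)

    def add(self, component: Component) -> "AnyToAnyGraph":
        self.components.append(component)
        return self

    def edges(self) -> list[tuple[str, str]]:
        """(producer, consumer) pairs derived from declared inputs."""
        names = {c.name for c in self.components}
        return [
            (src, c.name)
            for c in self.components
            for src in c.inputs
            if src in names
        ]


# Example: image + text in, text + image out (encoder -> LLM -> DiT).
graph = (
    AnyToAnyGraph()
    .add(Component("vision_encoder", "encoder", inputs=["image"]))
    .add(Component("llm", "autoregressive", inputs=["text", "vision_encoder"]))
    .add(Component("dit_generator", "generator", inputs=["llm"]))
)
print(graph.edges())  # [('vision_encoder', 'llm'), ('llm', 'dit_generator')]
```

In this sketch, a planner would consume the component list and edge structure to decide, per component, whether to serve it colocated with its neighbors or disaggregated onto separate GPUs based on model and workload characteristics.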
Similar Papers
Cornstarch: Distributed Multimodal Training Must Be Multimodality-Aware (Distributed, Parallel, and Cluster Computing): Trains smart AI models that understand pictures and words.
AugServe: Adaptive Request Scheduling for Augmented Large Language Model Inference Serving (Computation and Language): Makes AI answer questions much faster.
RServe: Overlapping Encoding and Prefill for Efficient LMM Inference (Distributed, Parallel, and Cluster Computing): Makes AI understand pictures and words faster.