AnchorTP: Resilient LLM Inference with State-Preserving Elastic Tensor Parallelism

Published: November 5, 2025 | arXiv ID: 2511.11617v1

By: Wendong Xu, Chujie Chen, He Xiao, and more

Potential Business Impact:

Keeps LLM inference services available, recovering quickly even when a single GPU fails.


Large Language Model (LLM) inference services demand exceptionally high availability and low latency, yet multi-GPU Tensor Parallelism (TP) makes them vulnerable to single-GPU failures. We present AnchorTP, a state-preserving elastic TP framework for fast recovery. It (i) enables Elastic Tensor Parallelism (ETP) with unequal-width partitioning over any number of GPUs and compatibility with Mixture-of-Experts (MoE), and (ii) preserves model parameters and KV caches in GPU memory via a daemon decoupled from the inference process. To minimize downtime, we propose a bandwidth-aware planner based on a Continuous Minimal Migration (CMM) algorithm that minimizes reload bytes under a byte-cost dominance assumption, and an execution scheduler that pipelines P2P transfers with reloads. These components jointly restore service quickly with minimal data movement and without changing service interfaces. In typical failure scenarios, AnchorTP reduces Time to First Success (TFS) by up to 11x and Time to Peak (TTP) by up to 59% versus restart-and-reload.
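To make the recovery idea concrete, the sketch below illustrates unequal-width elastic repartitioning and reload-byte accounting after a GPU failure. This is not the paper's CMM algorithm; it is a simplified, hypothetical illustration (function names `partition`, `replan`, and the contiguous-shard model are assumptions) showing how keeping each survivor's resident columns reduces the bytes that must be migrated, which is the quantity CMM minimizes.

```python
def partition(total, n):
    """Split `total` weight columns into n near-equal contiguous shards."""
    base, rem = divmod(total, n)
    shards, start = [], 0
    for i in range(n):
        width = base + (1 if i < rem else 0)
        shards.append((start, start + width))
        start += width
    return shards

def overlap(a, b):
    """Number of columns shared by two half-open intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def replan(total, old_shards, failed):
    """After GPU `failed` dies, repartition over the survivors and count
    the columns each survivor must fetch (i.e. not already resident).
    A real planner would also weight shards by measured link bandwidth."""
    survivors = [i for i in range(len(old_shards)) if i != failed]
    new_shards = partition(total, len(survivors))
    plan, moved = {}, 0
    for gpu, shard in zip(survivors, new_shards):
        kept = overlap(old_shards[gpu], shard)          # columns already on this GPU
        need = (shard[1] - shard[0]) - kept             # columns to reload or P2P-copy
        plan[gpu] = {"new_shard": shard, "reload_cols": need}
        moved += need
    return plan, moved

# Example: 12 columns on 4 GPUs; GPU 1 fails, 3 survivors take unequal deltas.
plan, moved = replan(12, partition(12, 4), failed=1)
```

In this toy case only 4 of 12 columns move, versus 12 for a restart-and-reload that discards all resident state; preserving parameters and KV caches in GPU memory is what makes such partial migration possible.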

Country of Origin
🇭🇰 Hong Kong

Page Count
8 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing