Score: 1

LobRA: Multi-tenant Fine-tuning over Heterogeneous Data

Published: September 1, 2025 | arXiv ID: 2509.01193v1

By: Sheng Lin, Fangcheng Fu, Haoyang Li, and more

Potential Business Impact:

Fine-tunes AI models faster using less GPU compute.

Business Areas:
Big Data, Data and Analytics

With the breakthrough of Transformer-based pre-trained models, the demand for fine-tuning (FT) to adapt base pre-trained models to downstream applications continues to grow, so it is essential for service providers to reduce the cost of processing FT requests. Low-rank adaptation (LoRA) is a widely used FT technique that trains only small-scale adapters and keeps the base model unaltered, making it possible to process multiple FT tasks by jointly training different LoRA adapters over a shared base model. Nevertheless, through in-depth analysis, we reveal that the efficiency of joint FT is hampered by two heterogeneity issues in the training data: sequence length variation and sequence length skewness. To tackle these issues, we develop LobRA, a new framework that supports processing multiple FT tasks by jointly training LoRA adapters. Two innovative designs are introduced. First, LobRA deploys the FT replicas (i.e., model replicas for FT) with heterogeneous resource usages and parallel configurations, matching the diverse workloads caused by sequence length variation. Second, at each training step, LobRA takes the sequence length skewness into account and dispatches the training data among the heterogeneous FT replicas to achieve workload balance. We conduct experiments to assess the performance of LobRA, validating that it significantly reduces the GPU seconds required for joint FT by 45.03%-60.67%.
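The scheduling idea described in the abstract (balancing skewed sequence lengths across FT replicas that have different parallel configurations and capacities) can be illustrated with a small greedy-dispatch sketch. This is not the paper's algorithm: the cost model, replica names, and throughput factors below are illustrative assumptions.

```python
# Minimal sketch, assuming a simple cost model: assign variable-length
# sequences to heterogeneous FT replicas so that the estimated per-replica
# training cost stays balanced. All names and constants are hypothetical.

from dataclasses import dataclass, field
from typing import List
import heapq

@dataclass
class Replica:
    name: str
    throughput: float                  # relative compute capacity of this replica's parallel config
    sequences: List[int] = field(default_factory=list)
    load: float = 0.0                  # accumulated estimated cost, normalized by throughput

def estimated_cost(seq_len: int) -> float:
    # Attention makes per-sequence cost grow super-linearly with length;
    # a linear-plus-quadratic term is a rough proxy, not the paper's model.
    return seq_len + 1e-4 * seq_len ** 2

def dispatch(seq_lens: List[int], replicas: List[Replica]) -> None:
    """Greedy longest-first assignment: give each sequence (longest first)
    to the replica with the smallest normalized load so far."""
    heap = [(r.load, i) for i, r in enumerate(replicas)]
    heapq.heapify(heap)
    for length in sorted(seq_lens, reverse=True):
        load, i = heapq.heappop(heap)
        replica = replicas[i]
        replica.sequences.append(length)
        replica.load = load + estimated_cost(length) / replica.throughput
        heapq.heappush(heap, (replica.load, i))

if __name__ == "__main__":
    # Skewed length distribution: many short sequences, a few very long ones.
    lengths = [256] * 48 + [2048] * 6 + [8192] * 2
    pool = [
        Replica("tp4_long_seq", throughput=4.0),   # larger parallel group suited to long sequences
        Replica("tp1_short_a", throughput=1.0),
        Replica("tp1_short_b", throughput=1.0),
    ]
    dispatch(lengths, pool)
    for r in pool:
        print(f"{r.name}: {len(r.sequences)} seqs, normalized load {r.load:.1f}")
```

Running the sketch shows the long sequences gravitating toward the higher-throughput replica while the short ones spread across the smaller replicas, which is the kind of workload balance the abstract attributes to LobRA's dispatching step.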

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing