Communication-Pipelined Split Federated Learning for Foundation Model Fine-Tuning in UAV Networks
By: Zizhen Zhou, Ying-Chang Liang, Yanyu Cheng, and more
Potential Business Impact:
Drones learn faster using less power.
Deploying foundation models (FMs) on uncrewed aerial vehicles (UAVs) promises broad "low-altitude economy" applications. Split federated learning (SFL)-based fine-tuning leverages distributed data while keeping raw data local, and reduces the client-side burden by partitioning the model between client and server. However, the per-round training latency is dominated by stragglers. Training paradigms featuring parallel gradient transmission (GT) allocate a dedicated portion of the downlink communication resources to each client; they may leave resources idle and suffer from prolonged GT latency, especially in UAV networks, where communication latency typically far exceeds computation latency. To address this, we propose a sequential GT paradigm, in which the server dedicates all downlink resources to the current GT. We further propose communication-pipelined SFL (CPSFL), characterized by downlink GT priority scheduling and intra-round asynchronous training. We investigate CPSFL-based LoRA fine-tuning of FMs in UAV networks and formulate an optimization problem that minimizes a weighted sum of per-round training latency and worst-case client energy consumption by optimizing the split point selection (SPS) and the computing and communication resource allocation (CCRA), i.e., the uplink bandwidth allocation and the server computing frequency allocation. To solve this problem, we develop an attention-based deep reinforcement learning (DRL) framework in which the base station agent decides the split point and the CCRA in each round by leveraging previous-round information, including UAV trajectories. Simulation results show that the proposed DRL-based CPSFL scheme outperforms the parallel GT benchmarks, the ablation variants, and the fixed-CCRA scheme, while approaching the best fixed-SPS scheme.
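To make the pipelining idea concrete, here is a minimal toy sketch (not the paper's simulation) contrasting per-round latency under parallel GT, where each of K clients receives a dedicated 1/K slice of the downlink bandwidth, against sequential GT, where the server devotes the full bandwidth to one client at a time and each client starts its local computation as soon as its own GT completes. All numbers, function names, and the fixed service order are illustrative assumptions; the paper's CPSFL additionally chooses the GT order via priority scheduling and optimizes the split point and resource allocation, which this sketch omits.

```python
# Toy comparison of per-round latency under parallel vs. sequential
# gradient transmission (GT). All values below (bandwidth, gradient
# sizes, compute times) are made-up illustrative numbers, not from
# the paper.

def parallel_gt_latency(grad_bits, total_bw_hz, bits_per_hz, compute_s):
    """Parallel GT: each client gets an equal, dedicated slice of the
    downlink bandwidth, so every transmission runs at 1/K of the full
    rate and the round ends with the straggler."""
    k = len(grad_bits)
    per_client_rate = (total_bw_hz / k) * bits_per_hz  # bits/s per client
    finish = [d / per_client_rate + c for d, c in zip(grad_bits, compute_s)]
    return max(finish)

def sequential_gt_latency(grad_bits, total_bw_hz, bits_per_hz, compute_s):
    """Sequential GT: the server dedicates the full downlink bandwidth
    to one client at a time; each client starts computing as soon as
    its own GT completes (intra-round asynchronous training), so its
    computation overlaps with the remaining clients' GTs."""
    full_rate = total_bw_hz * bits_per_hz
    t, finish = 0.0, []
    for d, c in zip(grad_bits, compute_s):
        t += d / full_rate          # this client's GT ends at time t
        finish.append(t + c)        # it then computes while others receive
    return max(finish)

if __name__ == "__main__":
    grads = [8e8, 6e8, 1e9, 7e8]     # gradient sizes in bits (illustrative)
    compute = [0.3, 0.2, 0.4, 0.25]  # client compute times in seconds
    bw, eff = 20e6, 4.0              # 20 MHz downlink, 4 bit/s/Hz

    print(f"parallel GT round latency:   {parallel_gt_latency(grads, bw, eff, compute):.2f} s")
    print(f"sequential GT round latency: {sequential_gt_latency(grads, bw, eff, compute):.2f} s")
```

With these toy numbers the sequential paradigm finishes the round in roughly 39 s versus about 50 s for parallel GT, illustrating the abstract's point that dedicating the full downlink to one GT at a time pays off when communication latency dominates computation latency.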
Similar Papers
Communication-and-Computation Efficient Split Federated Learning: Gradient Aggregation and Resource Management
Distributed, Parallel, and Cluster Computing
Makes AI learn faster with less data sent.
Split Federated Learning for UAV-Enabled Integrated Sensing, Computation, and Communication
Distributed, Parallel, and Cluster Computing
Drones learn faster, use less power, and protect privacy.
Federated Split Learning with Improved Communication and Storage Efficiency
Machine Learning (CS)
Trains AI smarter with less data sent.