Federated Cross-Modal Style-Aware Prompt Generation
By: Suraj Prasad, Navyansh Mahla, Sunny Gupta, and more
Potential Business Impact:
Helps AI learn from private pictures without sharing them.
Prompt learning has propelled vision-language models like CLIP to excel in diverse tasks, making them ideal for federated learning due to computational efficiency. However, conventional approaches that rely solely on final-layer features miss out on rich multi-scale visual cues and domain-specific style variations in decentralized client data. To bridge this gap, we introduce FedCSAP (Federated Cross-Modal Style-Aware Prompt Generation). Our framework harnesses low, mid, and high-level features from CLIP's vision encoder alongside client-specific style indicators derived from batch-level statistics. By merging intricate visual details with textual context, FedCSAP produces robust, context-aware prompt tokens that are both distinct and non-redundant, thereby boosting generalization across seen and unseen classes. Operating within a federated learning paradigm, our approach ensures data privacy through local training and global aggregation, adeptly handling non-IID class distributions and diverse domain-specific styles. Comprehensive experiments on multiple image classification datasets confirm that FedCSAP outperforms existing federated prompt learning methods in both accuracy and overall generalization.
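The core idea above can be sketched in a few lines: extract batch-level style statistics from multi-scale vision features, fuse them into prompt tokens, and average client updates on a server. This is a minimal illustration, not the paper's implementation; the statistic choice (per-channel mean and std, in the spirit of AdaIN-style indicators), the fusion network `W`, and the FedAvg aggregation are all assumptions for demonstration.

```python
import numpy as np

def style_stats(feats):
    # Batch-level style indicators: per-channel mean and std over the
    # batch and patch dimensions (an assumed statistic; the paper's exact
    # style descriptors may differ).
    return np.concatenate([feats.mean(axis=(0, 1)), feats.std(axis=(0, 1))])

def make_prompt_tokens(low, mid, high, W):
    # Fuse low-, mid-, and high-level feature statistics into a single
    # context vector, then project it into prompt-token space with a
    # hypothetical learned matrix W.
    ctx = np.concatenate([style_stats(low), style_stats(mid), style_stats(high)])
    return np.tanh(ctx @ W)

def fedavg(client_weights, client_sizes):
    # Standard FedAvg: dataset-size-weighted average of client parameters,
    # standing in here for the paper's global aggregation step.
    sizes = np.asarray(client_sizes, dtype=float)
    mix = sizes / sizes.sum()
    return sum(m * w for m, w in zip(mix, client_weights))

rng = np.random.default_rng(0)
# Toy multi-scale features: (batch, patches, channels) at three depths.
low = rng.normal(size=(4, 16, 8))
mid = rng.normal(size=(4, 8, 16))
high = rng.normal(size=(4, 4, 32))
W = rng.normal(size=(2 * (8 + 16 + 32), 24))  # context dim -> 24 prompt dims

tokens = make_prompt_tokens(low, mid, high, W)
global_W = fedavg([W, 2 * W], [1, 3])
```

Each client would train its own `W` locally on private data, and only these parameters (never images) travel to the server for aggregation, which is what keeps the scheme privacy-preserving under non-IID client distributions.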
Similar Papers
FedDEAP: Adaptive Dual-Prompt Tuning for Multi-Domain Federated Learning
CV and Pattern Recognition
Helps AI learn from many pictures without sharing them.
FedMVP: Federated Multimodal Visual Prompt Tuning for Vision-Language Models
CV and Pattern Recognition
Teaches AI to learn new things better.
DSS-Prompt: Dynamic-Static Synergistic Prompting for Few-Shot Class-Incremental Learning
CV and Pattern Recognition
Teaches computers to learn new things without forgetting.