LoRAShield: Data-Free Editing Alignment for Secure Personalized LoRA Sharing
By: Jiahao Chen, Junhao Li, Yiming Wang, and more
Potential Business Impact:
Stops shared AI image models from being misused to make harmful pictures.
The proliferation of Low-Rank Adaptation (LoRA) models has democratized personalized text-to-image generation, enabling users to share lightweight models (e.g., personal portraits) on platforms like Civitai and Liblib. However, this "share-and-play" ecosystem introduces critical risks: benign LoRAs can be weaponized by adversaries to generate harmful content (e.g., political, defamatory imagery), undermining creator rights and platform safety. Existing defenses like concept-erasure methods focus on full diffusion models (DMs), neglecting LoRA's unique role as a modular adapter and its vulnerability to adversarial prompt engineering. To bridge this gap, we propose LoRAShield, the first data-free editing framework for securing LoRA models against misuse. Our platform-driven approach dynamically edits and realigns LoRA's weight subspace via adversarial optimization and semantic augmentation. Experimental results demonstrate that LoRAShield achieves remarkable effectiveness, efficiency, and robustness in blocking malicious generations without sacrificing the functionality of the benign task. By shifting the defense to platforms, LoRAShield enables secure, scalable sharing of personalized models, a critical step toward trustworthy generative ecosystems.
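The abstract's core idea of "editing and realigning LoRA's weight subspace" can be illustrated with a minimal sketch. This is not LoRAShield's actual algorithm (the paper's adversarial optimization and semantic augmentation are not reproduced here); it only shows, under stated assumptions, how a low-rank adapter `W' = W + B @ A` can be edited data-free by projecting a hypothetical "harmful" concept direction `h` out of the adapter's input subspace, blocking the adapter's response to that direction while leaving orthogonal (benign) inputs untouched:

```python
import numpy as np

rng = np.random.default_rng(0)

# LoRA adds a low-rank update to a frozen base weight: W' = W + B @ A
d_out, d_in, r = 64, 32, 4
W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # LoRA down-projection
B = rng.normal(size=(d_out, r))      # LoRA up-projection

# Hypothetical "harmful" embedding direction to block (an assumption for
# illustration; in practice it would come from the text encoder's space).
h = rng.normal(size=(d_in,))
h /= np.linalg.norm(h)

# Data-free edit: remove the component of A that reads the harmful
# direction, i.e. project h out of the adapter's input subspace.
A_edited = A - (A @ h)[:, None] * h[None, :]

# The edited adapter no longer responds to the harmful direction...
assert np.allclose(A_edited @ h, 0.0, atol=1e-10)

# ...while behavior on inputs orthogonal to h is preserved (benign task).
x = rng.normal(size=(d_in,))
x_orth = x - (x @ h) * h
assert np.allclose((W + B @ A) @ x_orth, (W + B @ A_edited) @ x_orth)
```

The projection requires no training data, only the direction to suppress, which is why such subspace edits suit a platform-side, data-free setting.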
Similar Papers
Cross-LoRA: A Data-Free LoRA Transfer Framework across Heterogeneous LLMs
Machine Learning (CS)
Moves AI skills between different computer brains.
TeleLoRA: Teleporting Model-Specific Alignment Across LLMs
Machine Learning (CS)
Cleans harmful secrets from AI without retraining.
AutoLoRA: Automatic LoRA Retrieval and Fine-Grained Gated Fusion for Text-to-Image Generation
CV and Pattern Recognition
Lets computers create many different pictures easily.