Score: 1

MOLM: Mixture of LoRA Markers

Published: September 30, 2025 | arXiv ID: 2510.00293v1

By: Samar Fares, Nurbek Tastan, Noor Hussein, and more

Potential Business Impact:

Watermarks AI-generated images so the model that made them can be identified and verified.

Business Areas:
Multi-level Marketing, Sales and Marketing

Generative models can produce photorealistic images at scale. This raises urgent concerns about the ability to detect synthetically generated images and attribute them to specific sources. While watermarking has emerged as a possible solution, existing methods remain fragile to realistic distortions, susceptible to adaptive removal, and expensive to update when the underlying watermarking key changes. We propose a general watermarking framework that formulates the encoding problem as a key-dependent perturbation of the parameters of a generative model. Within this framework, we introduce Mixture of LoRA Markers (MOLM), a routing-based instantiation in which binary keys activate lightweight LoRA adapters inside residual and attention blocks. This design avoids key-specific re-training and achieves the desired properties of imperceptibility, fidelity, verifiability, and robustness. Experiments on Stable Diffusion and FLUX show that MOLM preserves image quality while achieving robust key recovery against distortions, compression and regeneration, averaging attacks, and black-box adversarial attacks on the extractor.
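To make the routing idea concrete, here is a minimal sketch (not the authors' code) of key-dependent LoRA routing for a single linear projection: each bit of a binary key gates one lightweight low-rank adapter, so the watermark amounts to a key-dependent perturbation of the generator's weights. The class name, rank, scaling, and per-bit adapter layout are illustrative assumptions, not MOLM's exact configuration.

```python
# Minimal sketch of key-gated LoRA routing (illustrative; not the paper's implementation).
import torch
import torch.nn as nn

class KeyedLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, num_bits: int = 8, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base  # frozen pretrained projection (e.g. inside an attention block)
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.scale = scale
        # One low-rank adapter (A, B) per key bit; bit k activates adapter k.
        self.A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, base.in_features) * 0.01) for _ in range(num_bits)]
        )
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(base.out_features, rank)) for _ in range(num_bits)]
        )

    def forward(self, x: torch.Tensor, key_bits: torch.Tensor) -> torch.Tensor:
        # key_bits: (num_bits,) tensor of {0, 1}; only adapters whose bit is 1 contribute.
        out = self.base(x)
        for k, bit in enumerate(key_bits):
            if bit:  # route the activation through adapter k
                out = out + self.scale * (x @ self.A[k].T) @ self.B[k].T
        return out

# Usage: wrap a projection layer and embed the hypothetical key "10110010" at generation time.
layer = KeyedLoRALinear(nn.Linear(64, 64), num_bits=8, rank=4)
key = torch.tensor([1, 0, 1, 1, 0, 0, 1, 0])
y = layer(torch.randn(2, 64), key)
print(y.shape)  # torch.Size([2, 64])
```

Because the key only selects which adapters are active, changing the watermarking key swaps the routing pattern rather than re-training key-specific weights, which is the property the abstract highlights.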

Country of Origin
🇦🇪 🇺🇸 United States, United Arab Emirates

Page Count
21 pages

Category
Computer Science:
Computer Vision and Pattern Recognition