MOLM: Mixture of LoRA Markers
By: Samar Fares, Nurbek Tastan, Noor Hussein, and more
Potential Business Impact:
Marks AI-generated pictures so we know who made them.
Generative models can produce photorealistic images at scale, raising urgent concerns about our ability to detect synthetically generated images and attribute them to specific sources. While watermarking has emerged as a possible solution, existing methods remain fragile to realistic distortions, susceptible to adaptive removal, and expensive to update when the underlying watermarking key changes. We propose a general watermarking framework that formulates the encoding problem as a key-dependent perturbation of the parameters of a generative model. Within this framework, we introduce Mixture of LoRA Markers (MOLM), a routing-based instantiation in which binary keys activate lightweight LoRA adapters inside residual and attention blocks. This design avoids key-specific retraining and achieves the desired properties of imperceptibility, fidelity, verifiability, and robustness. Experiments on Stable Diffusion and FLUX show that MOLM preserves image quality while achieving robust key recovery against distortions, compression, regeneration, averaging attacks, and black-box adversarial attacks on the extractor.
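To make the routing idea concrete, here is a minimal PyTorch sketch of a linear layer augmented with a bank of LoRA adapters, where each bit of a binary key switches one adapter on or off. The class name, adapter count, rank, and initialization are illustrative assumptions, not the authors' implementation; the abstract does not specify these details.

```python
# Illustrative sketch of key-routed LoRA adapters (hypothetical names/shapes;
# MOLM's actual architecture and routing details may differ).
import torch
import torch.nn as nn


class KeyRoutedLoRALinear(nn.Module):
    """A frozen linear layer plus a bank of LoRA adapters gated by a binary key."""

    def __init__(self, base: nn.Linear, num_adapters: int = 8, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # pretrained weights stay frozen
            p.requires_grad_(False)
        d_in, d_out = base.in_features, base.out_features
        # One low-rank pair (A, B) per key bit; B starts at zero so an
        # untrained adapter leaves the base model's output unchanged.
        self.A = nn.Parameter(torch.randn(num_adapters, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_adapters, d_out, rank))

    def forward(self, x: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
        # key: (num_adapters,) binary vector; bit i activates adapter i.
        out = self.base(x)
        for i in torch.nonzero(key, as_tuple=False).flatten():
            out = out + x @ self.A[i].T @ self.B[i].T  # low-rank perturbation
        return out


layer = KeyRoutedLoRALinear(nn.Linear(64, 64), num_adapters=8, rank=4)
key = torch.tensor([1, 0, 1, 1, 0, 0, 1, 0])  # the binary watermarking key
y = layer(torch.randn(2, 64), key)            # key-dependent output perturbation
```

Under this reading, the key only selects among pre-trained adapters at inference time, so switching to a new key requires no retraining, which is the property the abstract highlights.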
Similar Papers
AuthenLoRA: Entangling Stylization with Imperceptible Watermarks for Copyright-Secure LoRA Adapters
Cryptography and Security
Marks AI art so you know who made it.
Mitigating Watermark Forgery in Generative Models via Multi-Key Watermarking
Cryptography and Security
Stops attackers from forging AI watermarks.
Watermarks for Language Models via Probabilistic Automata
Cryptography and Security
Marks AI writing so it can be detected.