Smark: A Watermark for Text-to-Speech Diffusion Models via Discrete Wavelet Transform
By: Yichuan Zhang, Chengxin Li, Yujie Gu
Text-to-Speech (TTS) diffusion models generate high-quality speech, which raises challenges for model intellectual property protection and for tracing generated speech in legal use. Audio watermarking is a promising solution. However, due to the structural differences among TTS diffusion models, existing watermarking methods are often designed for a specific model and degrade audio quality, which limits their practical applicability. To address this dilemma, this paper proposes a universal watermarking scheme for TTS diffusion models, termed Smark. This is achieved by designing a lightweight watermark embedding framework that operates in the common reverse diffusion paradigm shared by all TTS diffusion models. To mitigate the impact on audio quality, Smark uses the discrete wavelet transform (DWT) to embed watermarks into the relatively stable low-frequency regions of the audio, which ensures seamless watermark-audio integration and resists removal during the reverse diffusion process. Extensive experiments evaluate audio quality and watermark performance under various simulated real-world attack scenarios. The results show that Smark achieves superior performance in both audio quality and watermark extraction accuracy.
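The core idea of embedding a watermark into the low-frequency DWT band can be illustrated with a minimal sketch. The abstract does not specify Smark's actual embedding or extraction algorithm (which operates inside the reverse diffusion process), so the functions below (`embed`, `extract`, the `strength` parameter, and the block-wise bipolar offset) are illustrative assumptions, not the paper's method; a single-level Haar transform stands in for the DWT.

```python
import numpy as np

def haar_dwt(x):
    # Single-level Haar DWT: split the signal into low-frequency
    # (approximation) and high-frequency (detail) coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # Inverse single-level Haar DWT (exact reconstruction).
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def embed(audio, bits, strength=0.01):
    # Hypothetical embedding: add a small bipolar offset per bit to a
    # block of low-frequency coefficients, leaving details untouched.
    a, d = haar_dwt(audio)
    n = len(a) // len(bits)
    for i, b in enumerate(bits):
        a[i * n:(i + 1) * n] += strength * (1 if b else -1)
    return haar_idwt(a, d)

def extract(watermarked, original, nbits):
    # Non-blind extraction for illustration: the sign of the mean
    # low-frequency difference in each block recovers one bit.
    a_w, _ = haar_dwt(watermarked)
    a_o, _ = haar_dwt(original)
    diff = a_w - a_o
    n = len(a_w) // nbits
    return [int(diff[i * n:(i + 1) * n].mean() > 0) for i in range(nbits)]
```

Because the offset is confined to the approximation band and is small relative to the signal, the perceptual distortion stays low, which is the intuition behind placing the watermark in the stable low-frequency regions.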