Yours or Mine? Overwriting Attacks against Neural Audio Watermarking

Published: September 6, 2025 | arXiv ID: 2509.05835v1

By: Lingfeng Yao, Chenpei Huang, Shengyao Wang, and more

Potential Business Impact:

Overwrites legitimate AI audio watermarks with forged ones, undermining copyright protection and source verification.

Business Areas:
Fraud Detection, Financial Services, Payments, Privacy and Security

As generative audio models rapidly evolve, AI-generated audio raises increasing concerns about copyright infringement and the spread of misinformation. Audio watermarking, as a proactive defense, can embed secret messages into audio for copyright protection and source verification. However, current neural audio watermarking methods focus primarily on the imperceptibility and robustness of watermarking, while ignoring its vulnerability to security attacks. In this paper, we develop a simple yet powerful attack: the overwriting attack, which overwrites the legitimate audio watermark with a forged one and makes the original legitimate watermark undetectable. Based on the watermarking information available to the adversary, we propose three categories of overwriting attacks: white-box, gray-box, and black-box. We also thoroughly evaluate the proposed attacks on state-of-the-art neural audio watermarking methods. Experimental results demonstrate that the proposed overwriting attacks can effectively compromise existing watermarking schemes across various settings and achieve a nearly 100% attack success rate. The practicality and effectiveness of the proposed overwriting attacks expose security flaws in existing neural audio watermarking systems, underscoring the need to enhance security in future audio watermarking designs.
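To make the white-box overwriting idea concrete, here is a minimal toy sketch, not the paper's method: the paper targets learned neural watermarkers, whereas this example uses a classical additive spread-spectrum watermark (keyed bipolar carrier, correlation detector). All names, the strength `ALPHA`, and the chunk size are illustrative assumptions. The attacker, who in the white-box setting knows the embedding scheme and key, first decodes the legitimate bits, strips the carrier, and then re-embeds a forged message, so the detector now reads only the forged watermark.

```python
import numpy as np

ALPHA = 0.05   # watermark strength (assumed, illustrative)
CHUNK = 1024   # audio samples carrying one message bit (assumed)

def _pattern(key, n):
    # Keyed pseudo-random bipolar carrier shared by embedder and detector.
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n)

def embed(audio, bits, key):
    # Add the carrier, sign-modulated per chunk by the message bits.
    p = _pattern(key, len(audio))
    signs = np.repeat(np.where(np.array(bits) == 1, 1.0, -1.0), CHUNK)
    return audio + ALPHA * signs * p

def detect(audio, key, n_bits):
    # Correlate each chunk with the carrier; the sign recovers the bit.
    p = _pattern(key, len(audio))
    corr = (audio * p)[: n_bits * CHUNK].reshape(n_bits, CHUNK).sum(axis=1)
    return (corr > 0).astype(int).tolist()

def overwrite_attack(audio, key, forged_bits):
    # White-box adversary: knows key and embedder, so it can decode the
    # legitimate bits, subtract their carrier, and embed a forged message.
    n = len(forged_bits)
    orig_bits = detect(audio, key, n)
    p = _pattern(key, len(audio))
    signs = np.repeat(np.where(np.array(orig_bits) == 1, 1.0, -1.0), CHUNK)
    stripped = audio - ALPHA * signs * p   # remove legitimate watermark
    return embed(stripped, forged_bits, key)

# Demo: the legitimate watermark survives until it is overwritten.
host = np.random.default_rng(0).normal(0.0, 0.1, 4 * CHUNK)
legit_bits, forged_bits = [1, 0, 1, 1], [0, 1, 0, 0]
watermarked = embed(host, legit_bits, key=42)
attacked = overwrite_attack(watermarked, 42, forged_bits)
```

After the attack, `detect(attacked, 42, 4)` returns the forged bits, i.e., the legitimate watermark is no longer detectable, which mirrors the attack goal described in the abstract. Against a real neural watermarker the gray- and black-box variants would not have the key or model and must work with far less information.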

Page Count
8 pages

Category
Computer Science:
Cryptography and Security