XAttnMark: Learning Robust Audio Watermarking with Cross-Attention

Authors: Yixin Liu, Lie Lu, Jihui Jin, Lichao Sun, Andrea Fanelli

Published: 2025-02-06 17:15:08+00:00

Comment: 24 pages, 10 figures

AI Summary

This paper introduces XAttnMark, a novel neural audio watermarking framework designed to achieve robust watermark detection and accurate message attribution simultaneously, addressing limitations of prior methods. It employs architectural innovations such as partial parameter sharing, a cross-attention mechanism for efficient message retrieval, and a temporal conditioning module for improved message distribution. XAttnMark also integrates a psychoacoustic-aligned temporal-frequency masking loss to enhance watermark imperceptibility, demonstrating state-of-the-art performance against a wide range of audio transformations, including challenging generative editing.

Abstract

The rapid proliferation of generative audio synthesis and editing technologies has raised significant concerns about copyright infringement, data provenance, and the spread of misinformation through deepfake audio. Watermarking offers a proactive solution by embedding imperceptible, identifiable, and traceable marks into audio content. While recent neural network-based watermarking methods like WavMark and AudioSeal have improved robustness and quality, they struggle to achieve both robust detection and accurate attribution simultaneously. This paper introduces Cross-Attention Robust Audio Watermark (XAttnMark), which bridges this gap by leveraging partial parameter sharing between the generator and the detector, a cross-attention mechanism for efficient message retrieval, and a temporal conditioning module for improved message distribution. Additionally, we propose a psychoacoustic-aligned temporal-frequency masking loss that captures fine-grained auditory masking effects, enhancing watermark imperceptibility. Our approach achieves state-of-the-art performance in both detection and attribution, demonstrating superior robustness against a wide range of audio transformations, including challenging generative editing with strong editing strength. The project webpage is available at https://liuyixin-louis.github.io/xattnmark/.


Key findings
XAttnMark achieves state-of-the-art performance with an average detection accuracy of 99.19% and attribution accuracy of 93% across diverse standard audio transformations. It demonstrates superior robustness against generative edits from models like AudioLDM2 and Stable Audio, maintaining 91-94% detection accuracy. The system also preserves high perceptual quality (PESQ 4.43, STOI 1.000) and stealthiness, with ablation studies confirming the vital roles of the cross-attention and temporal conditioning modules.
Approach
XAttnMark leverages partial parameter sharing between the neural generator and detector, utilizing a shared embedding table and a cross-attention module for efficient message retrieval in the detector. A temporal conditioning module distributes the message across the temporal axis before injection. To enhance imperceptibility, it employs a psychoacoustic-aligned temporal-frequency masking loss that captures fine-grained auditory masking effects.
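The cross-attention retrieval step described above can be sketched in a few lines. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: it assumes the detector produces a matrix of per-frame features, uses the shared message-embedding table as attention queries over those frames, and scores each candidate message by similarity between its embedding and the message-conditioned pooled summary. All names, shapes, and the scoring rule are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_retrieve(audio_feats, msg_embed_table):
    """Toy sketch of cross-attention message retrieval (illustrative only).

    audio_feats:     (T, d) detector features over T time frames.
    msg_embed_table: (M, d) shared embedding table of M candidate messages
                     (shared between generator and detector in the paper).
    Returns the index of the best-matching candidate message.
    """
    d = audio_feats.shape[-1]
    # Queries: candidate message embeddings; keys/values: audio frames.
    attn = softmax(msg_embed_table @ audio_feats.T / np.sqrt(d), axis=-1)  # (M, T)
    pooled = attn @ audio_feats                      # (M, d) per-candidate summary
    scores = (pooled * msg_embed_table).sum(axis=-1) # similarity per candidate
    return int(scores.argmax())

# Toy check: frames carrying (a noisy copy of) message 2's embedding.
rng = np.random.default_rng(0)
table = rng.standard_normal((8, 32))
feats = 0.1 * rng.standard_normal((40, 32)) + table[2]
idx = cross_attention_retrieve(feats, table)
```

In this toy setup the pooled summary for every query collapses toward the (noisy) embedded vector, so the self-similarity score of the true message dominates; the real detector instead attends over learned watermark features and decodes multi-bit messages, which this sketch does not attempt.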
Datasets
VoxPopuli, LibriSpeech, MusicCaps, AudioSet, Free Music Archive (FMA-Large), AudioMarkBench (Common Voice), ASVspoof, MusicGen, MUSDB18
Model(s)
XAttnMark (a neural watermarking system built on convolutional encoder-decoder models using components from EnCodec, with a cross-attention module)
Author countries
USA