Proto-LeakNet: Towards Signal-Leak Aware Attribution in Synthetic Human Face Imagery
Authors: Claudio Giusti, Luca Guarnera, Sebastiano Battiato
Published: 2025-11-06 10:51:11+00:00
AI Summary
Proto-LeakNet is introduced as an interpretable attribution framework for synthetic human face imagery, designed to exploit generator-specific statistical traces, known as signal leaks, embedded in the latent space of diffusion models. It combines closed-set classification with a density-based open-set evaluation on the learned embeddings, enabling robust attribution and generalization to unseen generators. The approach achieves a Macro AUC of 98.13% and remains robust under various post-processing degradations.
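As a rough illustration of the density-based open-set step mentioned above, the sketch below fits a simple per-generator Gaussian density on training embeddings and scores a query image by its minimum Mahalanobis distance to any known class; a large distance suggests an unseen generator. The function names, the Gaussian density model, and the thresholding idea are assumptions chosen for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch of density-based open-set scoring on learned embeddings.
# The Gaussian-per-class model and Mahalanobis criterion are illustrative
# assumptions, not the paper's documented method.
import numpy as np

def fit_class_densities(embeddings, labels):
    """Fit one Gaussian (mean + regularized inverse covariance) per known generator."""
    densities = {}
    for c in np.unique(labels):
        x = embeddings[labels == c]
        mu = x.mean(axis=0)
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
        densities[c] = (mu, np.linalg.inv(cov))
    return densities

def open_set_score(z, densities):
    """Minimum Mahalanobis distance to any known class; large -> likely unseen."""
    dists = [np.sqrt((z - mu) @ inv_cov @ (z - mu)) for mu, inv_cov in densities.values()]
    return min(dists)

# Toy usage: flag a sample as "unknown generator" if its score exceeds a
# threshold calibrated on held-out embeddings from known generators.
rng = np.random.default_rng(0)
train_z = rng.normal(size=(200, 16))       # stand-in for learned embeddings
train_y = rng.integers(0, 4, size=200)     # four known generator classes
densities = fit_class_densities(train_z, train_y)
print(open_set_score(rng.normal(size=16), densities))
```

A density-based score of this kind requires no retraining when a new generator appears, which matches the open-set behavior the summary describes.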
Abstract
The growing sophistication of synthetic image and deepfake generation models has turned source attribution and authenticity verification into a critical challenge for modern computer vision systems. Recent studies suggest that diffusion pipelines unintentionally imprint persistent statistical traces, known as signal leaks, within their outputs, particularly in latent representations. Building on this observation, we propose Proto-LeakNet, a signal-leak-aware and interpretable attribution framework that integrates closed-set classification with a density-based open-set evaluation on the learned embeddings, enabling analysis of unseen generators without retraining. Operating in the latent domain of diffusion models, our method re-simulates partial forward diffusion to expose residual generator-specific cues. A temporal attention encoder aggregates multi-step latent features, while a feature-weighted prototype head structures the embedding space and enables transparent attribution. Trained solely on closed-set data, Proto-LeakNet achieves a Macro AUC of 98.13% and learns a latent geometry that remains robust under post-processing, surpassing state-of-the-art methods and achieving strong separability between known and unseen generators. These results demonstrate that modeling signal-leak bias in latent space enables reliable and interpretable AI-image and deepfake forensics. The code for this work will be made available upon submission.
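To make the pipeline described in the abstract concrete, here is a minimal PyTorch sketch of its two latent-domain stages: re-simulating partial forward diffusion of a latent at several timesteps, then aggregating the resulting multi-step latents with a temporal attention encoder into a single embedding. The noise schedule, the chosen timesteps, the dimensions, and the module design are all illustrative assumptions; the paper's actual architecture is not reproduced here.

```python
# A minimal sketch, assuming a DDPM-style forward process z_t = sqrt(a_t) z0
# + sqrt(1 - a_t) eps, of the two stages named in the abstract. All names,
# the noise schedule, and dimensions are hypothetical.
import torch
import torch.nn as nn

def partial_forward_diffusion(z0, timesteps, alphas_cumprod):
    """Noise a latent z0 at several timesteps and stack the results."""
    steps = []
    for t in timesteps:
        a_t = alphas_cumprod[t]
        eps = torch.randn_like(z0)
        steps.append(a_t.sqrt() * z0 + (1 - a_t).sqrt() * eps)
    return torch.stack(steps, dim=1)       # (batch, T, dim)

class TemporalAttentionEncoder(nn.Module):
    """Self-attention over the T noised latents, mean-pooled to one embedding."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, z_seq):               # (batch, T, dim)
        h, _ = self.attn(z_seq, z_seq, z_seq)
        return self.proj(h.mean(dim=1))     # (batch, dim) embedding

# Toy usage with a linear noise schedule and three illustrative timesteps.
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
z0 = torch.randn(8, 64)                     # stand-in latents for 8 images
z_seq = partial_forward_diffusion(z0, [50, 100, 200], alphas_cumprod)
emb = TemporalAttentionEncoder()(z_seq)
print(emb.shape)                            # torch.Size([8, 64])
```

In this reading, the per-timestep noised latents form a short "temporal" sequence, so standard self-attention can weight the diffusion steps where generator-specific residue is most visible; the resulting embedding is what a prototype head and the open-set density scoring would then operate on.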