SynthGuard: An Open Platform for Detecting AI-Generated Multimedia with Multimodal LLMs
Authors: Shail Desai, Aditya Pawar, Li Lin, Xin Wang, Shu Hu
Published: 2025-11-16 00:50:24+00:00
AI Summary
SynthGuard is introduced as an open, user-friendly platform designed for detecting and analyzing AI-generated multimedia, encompassing both images and audio. The platform leverages a modularized backend of traditional deepfake detectors and Multimodal Large Language Models (MLLMs) to provide transparent, explainable forensic analysis. The system aims to address the limitations of existing tools that are closed-source or restricted to a single modality, making forensic research capabilities accessible to the public.
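To make the modular-backend idea concrete, the following is a minimal Python sketch of how a traditional detector and an MLLM-based explainer could be combined into a single explainable report. All class, function, and field names here are illustrative assumptions, not SynthGuard's actual API, and the detector and explainer bodies are placeholders.

```python
# Hypothetical sketch of a modular detection pipeline in the spirit of SynthGuard.
# Names are illustrative; the real platform's interfaces may differ.
from dataclasses import dataclass
from typing import Protocol


class Detector(Protocol):
    """Any backend detector: returns a probability that the input is AI-generated."""
    def score(self, media_path: str) -> float: ...


@dataclass
class CNNDeepfakeDetector:
    """Stand-in for a traditional (e.g., CNN-based) deepfake detector."""
    threshold: float = 0.5

    def score(self, media_path: str) -> float:
        # Placeholder: a real implementation would load the media and run a model.
        return 0.87


@dataclass
class MLLMExplainer:
    """Stand-in for an MLLM that produces a natural-language forensic explanation."""
    model_name: str = "example-mllm"

    def explain(self, media_path: str, detector_score: float) -> str:
        # Placeholder: a real implementation would prompt a multimodal LLM
        # with the media and the detector's output.
        return (
            f"Score {detector_score:.2f}: texture and lighting inconsistencies "
            "suggest the input is likely AI-generated."
        )


def analyze(media_path: str, detector: Detector, explainer: MLLMExplainer) -> dict:
    """Combine a detector verdict with an MLLM explanation into one report."""
    score = detector.score(media_path)
    return {
        "media": media_path,
        "score": score,
        "label": "ai-generated" if score >= 0.5 else "real",
        "explanation": explainer.explain(media_path, score),
    }


if __name__ == "__main__":
    report = analyze("sample.jpg", CNNDeepfakeDetector(), MLLMExplainer())
    print(report)
```

The design point this sketch illustrates is that detectors and explainers are swappable modules behind small interfaces, so an image detector, an audio detector, or a different MLLM can be plugged in without changing the reporting logic.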
Abstract
Artificial Intelligence (AI) has made it possible for anyone to create images, audio, and video with unprecedented ease, enriching education, communication, and creative expression. At the same time, the rapid rise of AI-generated media has introduced serious risks, including misinformation, identity misuse, and the erosion of public trust as synthetic content becomes increasingly indistinguishable from real media. Although deepfake detection has advanced, many existing tools remain closed-source, limited in modality, or short on transparency and educational value, making it difficult for users to understand how detection decisions are made. To address these gaps, we introduce SynthGuard, an open, user-friendly platform for detecting and analyzing AI-generated multimedia using both traditional detectors and multimodal large language models (MLLMs). SynthGuard provides explainable inference, unified image and audio support, and an interactive interface designed to make forensic analysis accessible to researchers, educators, and the public. The SynthGuard platform is available at: https://in-engr-nova.it.purdue.edu/