[SECURITY]

DEVELOPER CLAIMS GOOGLE'S AI WATERMARK SYSTEM CRACKED

AI DESK · TUE, APR 14, 2026

■ AI-SUMMARIZED FROM 1 SOURCE BELOW

A software developer has published what they claim is a reverse-engineered version of Google DeepMind's SynthID watermarking system, allegedly allowing watermarks to be removed from AI-generated images or added to other works. Google disputes the claim.

The developer, operating under the GitHub username Aloshdenny, has open-sourced their work and documented a process they say requires only 200 Gemini-generated images, signal-processing techniques, and a significant amount of free time. SynthID is Google DeepMind's watermarking tool, designed to embed invisible digital signatures into AI-generated images; the watermark is intended to persist through compression and editing, marking content as artificially generated.

According to Aloshdenny's documentation, the process involves analyzing multiple AI-generated images to identify consistent patterns and reconstruct the watermarking algorithm. The developer claims the method can both strip existing watermarks and insert watermarks into images that were not AI-generated.

Google has pushed back against these claims. A spokesperson said the developer's assertions do not accurately reflect how SynthID functions or how robust it is, and emphasized that the system was designed with security considerations and is not easily compromised through the methods described.

The dispute highlights ongoing tensions around authenticating AI-generated content. Watermarking systems are a critical tool for tracking and identifying synthetic media as AI capabilities grow more sophisticated; if such systems can be reliably circumvented, efforts to maintain transparency about AI-generated content could be undermined.

SynthID has been available in select Google products and services, and the company has positioned watermarking as part of broader efforts to develop AI responsibly, alongside other authentication methods. The open-source release of Aloshdenny's work will likely prompt security researchers, and Google itself, to examine the claims more thoroughly. Whether the reverse-engineering attempt succeeds or fails, it underscores the technical challenge of building watermarking systems resilient enough to withstand dedicated analysis from determined developers.
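The article does not detail Aloshdenny's actual technique, and SynthID's real mechanism is not public, so the following is purely a hypothetical sketch of the general idea behind "analyzing multiple AI-generated images to identify patterns": if a watermark were a fixed additive signal embedded identically in every image, averaging many images would suppress the uncorrelated image content while the shared pattern survived. The function names and synthetic data below are illustrative assumptions, not the developer's method or SynthID's design.

```python
import numpy as np

def estimate_watermark(images):
    """Estimate a shared additive pattern by averaging many images.

    Independent image content averages toward a flat mean, while a
    pattern embedded identically in every image survives the average.
    (Hypothetical model: real watermarks are far more sophisticated.)
    """
    stack = np.stack(images).astype(np.float64)
    mean = stack.mean(axis=0)
    return mean - mean.mean()  # drop the DC offset, keep the pattern

def strip_watermark(image, pattern):
    """Subtract an estimated pattern from a single image."""
    return image.astype(np.float64) - pattern

# Demo on synthetic data: 200 "generated" images sharing one faint pattern,
# echoing the article's claimed sample size.
rng = np.random.default_rng(0)
pattern = 0.2 * np.sin(np.linspace(0, 8 * np.pi, 64))[None, :] * np.ones((64, 1))
images = [rng.normal(size=(64, 64)) + pattern for _ in range(200)]

estimate = estimate_watermark(images)
# Correlation between the estimate and the true embedded pattern.
corr = np.corrcoef(estimate.ravel(), (pattern - pattern.mean()).ravel())[0, 1]
```

With 200 samples the averaging step leaves residual noise of roughly 1/√200 of the per-image noise, so even a faint pattern becomes recoverable; a robust watermark must defeat exactly this kind of statistical accumulation, which is the tension at the heart of Google's rebuttal.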

■ SOURCES

The Verge

■ SUMMARY WRITTEN BY AI FROM THE LINKS ABOVE