Google's DeepMind Unveils Invisible Watermark to Spot AI-Generated Images

As AI image generators grow in popularity, distinguishing authentic photographs from AI-created images is becoming harder. DeepMind, Google's AI unit, is addressing this with SynthID, an imperceptible watermark embedded in its AI-generated images to help counter misinformation.

SynthID generates an imperceptible digital watermark for AI-generated images.

Why this matters:

  • DeepMind's SynthID tags AI-generated images: invisible to people but detectable by software, the watermark is intended to help verify whether an image was AI-generated.
  • The technology isn't foolproof, however: DeepMind itself acknowledges that heavy image manipulation could remove the watermark.
  • The watermark applies only to images created with Imagen, Google's own image generator: Google aims to instantly identify AI-generated images by this effectively hidden mark.

DeepMind's head of research, Pushmeet Kohli, shared the following details:

  • The changes the watermark makes to an image are so subtle that humans can't notice them, yet DeepMind's software can still detect that the image was AI-generated.
  • The watermark remains detectable even after cropping or editing; changes to colors, contrast, or size don't remove it. A rough sketch of the general idea behind such watermarks follows below.
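DeepMind has not published how SynthID actually works, so the snippet below is only a minimal, hypothetical sketch of the general family of techniques the article gestures at: a spread-spectrum watermark that adds a faint, key-derived pattern to pixel values and later detects it by correlation. The names KEY, STRENGTH, embed, and detect are invented for illustration and have no connection to SynthID's real design.

```python
# Toy spread-spectrum watermark sketch -- NOT SynthID, whose algorithm is unpublished.
import numpy as np

KEY = 42          # secret seed shared by embedder and detector (hypothetical)
STRENGTH = 2.0    # amplitude of the hidden pattern; small enough to be invisible

def _pattern(shape, key=KEY):
    """Pseudo-random +/-1 pattern derived from the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image: np.ndarray) -> np.ndarray:
    """Add the low-amplitude secret pattern to a grayscale image."""
    marked = image.astype(np.float64) + STRENGTH * _pattern(image.shape)
    return np.clip(marked, 0, 255)

def detect(image: np.ndarray, threshold: float = 1.0) -> bool:
    """Correlate the image with the secret pattern; a high score means 'watermarked'."""
    residual = image.astype(np.float64) - image.mean()
    score = float(np.mean(residual * _pattern(image.shape)))
    return score > threshold

if __name__ == "__main__":
    # A random grayscale "image" stands in for generator output.
    img = np.random.randint(0, 256, size=(256, 256)).astype(np.float64)
    print("plain image flagged? ", detect(img))         # expected: False
    print("marked image flagged?", detect(embed(img)))  # expected: True
```

Note that this toy detector needs the edited image to stay pixel-aligned with the secret pattern, so unlike what DeepMind claims for SynthID it would not survive cropping or resizing; a production system embeds the signal in a way (reportedly using learned models, in SynthID's case) that tolerates such transformations.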

Calls for a standard approach to AI-generated image identification continue:

  • More coordination between companies is crucial: the different methods adopted by individual firms add complexity to tagging AI content.
  • Other tech giants, including Microsoft and Amazon, have pledged to watermark some AI content, responding to similar demands for transparency around AI-generated works.

(source)

submitted by /u/AIsupercharged