Google launches tool to identify AI-generated images


Google is launching a beta version of SynthID, a tool that watermarks and identifies AI-generated images.

The tool will initially be available to a limited number of customers using Imagen, Google’s cloud-based AI model for generating images from text.

Google SynthID places watermarks on AI-generated images

Visible watermarks are commonly used to assert copyright ownership, but they can easily be cropped out or removed with common image-editing tools.

SynthID instead embeds a digital watermark directly into the pixels of an image generated with Imagen, making it invisible to the human eye but detectable by dedicated software.
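Google has not published how SynthID’s embedder works internally (it is a learned deep network), but the underlying idea of hiding a machine-readable signal in pixel values can be illustrated with a classical spread-spectrum scheme: add a keyed pseudorandom pattern that is too faint to see, then recover it later by correlation. The sketch below is a hypothetical illustration of that general idea, not Google’s method; the function names, the key, and the strength parameter are all assumptions.

    import numpy as np

    def embed_watermark(pixels: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
        # Add a keyed +/-1 pseudorandom pattern at low amplitude;
        # a ~2-level change per pixel is imperceptible to the human eye.
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=pixels.shape)
        marked = pixels.astype(np.float64) + strength * pattern
        return np.clip(marked, 0, 255).astype(np.uint8)

    def detect_watermark(pixels: np.ndarray, key: int) -> float:
        # Correlate the image with the same keyed pattern: the score lands
        # near `strength` when the watermark is present and near 0 when absent.
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=pixels.shape)
        residual = pixels.astype(np.float64) - pixels.astype(np.float64).mean()
        return float(np.mean(residual * pattern))

    # Demo: the correlation score separates marked from unmarked images.
    image = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
    print(detect_watermark(embed_watermark(image, key=42), key=42))  # close to 2.0
    print(detect_watermark(image, key=42))                           # close to 0.0

Where SynthID’s learned approach differs is robustness: its watermark is designed to survive filters, colour changes and lossy compression, which a naive per-pixel pattern like this one generally does not.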

“We designed SynthID so it doesn’t compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colours, and saving with various lossy compression schemes — most commonly used for JPEGs,” said Google’s researchers.

The tool uses two deep learning models trained together on a diverse set of images: one embeds the imperceptible watermark, and the other assesses how likely it is that an image was created with Imagen, reporting the result as one of three confidence levels.
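Notably, the detector’s output is reported as one of three confidence levels rather than a raw probability. A minimal sketch of that final classification step might look like the following; the labels and threshold values are illustrative assumptions, since Google has not published them.

    from enum import Enum

    class WatermarkVerdict(Enum):
        DETECTED = "digital watermark detected"
        POSSIBLY_DETECTED = "digital watermark possibly detected"
        NOT_DETECTED = "digital watermark not detected"

    def classify(score: float, hi: float = 0.8, lo: float = 0.3) -> WatermarkVerdict:
        # Bucket a raw detector score (assumed to lie in [0, 1]) into
        # three levels; the 0.8 / 0.3 thresholds are placeholders.
        if score >= hi:
            return WatermarkVerdict.DETECTED
        if score >= lo:
            return WatermarkVerdict.POSSIBLY_DETECTED
        return WatermarkVerdict.NOT_DETECTED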


While metadata – data that stores details about who created an image file and when – is also commonly used for identification, it can be altered or lost during editing. SynthID’s watermarking, integrated into the image pixels, remains detectable even when metadata is missing.
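The fragility of metadata is easy to see in practice: most image editors and libraries silently drop EXIF data on re-save unless told to keep it. With the Pillow library, for example (the file name here is hypothetical):

    from PIL import Image

    img = Image.open("photo.jpg")             # hypothetical input with EXIF data
    print(img.getexif().get(271))             # tag 271 = "Make", e.g. the camera vendor
    img.save("edited.jpg")                    # re-save without passing exif=...
    print(Image.open("edited.jpg").getexif().get(271))  # None: the metadata is gone

A pixel-level watermark, by contrast, travels with the image content itself, so it survives this kind of round trip.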

A race towards safe AI

“SynthID isn’t foolproof against extreme image manipulations, but it does provide a promising technical approach for empowering people and organisations to work with AI-generated content responsibly. This tool could also evolve alongside other AI models and modalities beyond imagery such as audio, video, and text,” the researchers noted.

Google believes that, in the near future, SynthID could be expanded to other AI models, integrated into more Google products, and made available to third parties.

Previously, the company joined several other US AI giants – Amazon, Anthropic, Inflection, Meta, Microsoft, and OpenAI – in publicly committing to develop safe, transparent and ethical AI technology. One of those commitments is to “develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content.”


