AI-generated image watermarks can be easily removed, say researchers

Now that AI can make fake images that look real, how can we know what’s legitimate and what isn’t? One of the primary defenses has been watermarking: embedding invisible markers in AI-generated images to flag them as synthetic. Now, researchers have broken that technology.

Generative AI isn’t just for writing emails or suggesting recipes. It can generate entire images from scratch. While most people use that for fun (making cartoons of your dog) or practicality (envisioning a woodworking project, say), some use it irresponsibly. One example is creating images that look like real creators’ content (producing an image ‘in the style of’ a particular artist).

Another is using it for misinformation, either intentionally or unintentionally. This image-based misinformation has grown exponentially in an AI-powered world, according to Google researchers. Misinformation can be playful or experimental, such as Katy Perry’s deepfake attendance at the Met Gala, or the puffer-jacket Pope. But it can also be harmful, putting real people in situations they didn’t consent to and creating false narratives for ideological, financial, or other purposes.

In the early days of AI image generation, people could recognize the fakes themselves. The wrong number of fingers was one giveaway, as were body parts like hands and arms that didn’t fit together well, especially when people were pictured close together. As AI generation got better, we could still rely on programs to detect small inconsistencies in the images. But those fake images get more convincing every day.

Generative AI companies have been taking action to stop this. OpenAI, Google, and others committed to embedding watermarks in their AI-generated images. These are digital fingerprints, invisible to the naked eye but easily detectable by software, that prove an image was generated by AI and therefore not real.
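To make the idea concrete, here is a deliberately simplified sketch of an invisible watermark: hiding a payload in the least-significant bits of a few pixels. This is a toy illustration only; real schemes such as Google’s SynthID are far more sophisticated and robust, and the function names and payload here are invented for the example.

```python
import numpy as np

# Toy 8-bit payload standing in for a "this image is AI-generated" marker.
WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide bits in the least-significant bit of the first pixels.
    The change is invisible to the eye but trivially readable by software."""
    out = image.copy().ravel()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits
    return out.reshape(image.shape)

def detect_lsb(image: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n least-significant bits."""
    return image.ravel()[:n] & 1

# A flat gray image; embedding changes each marked pixel by at most 1.
img = np.full((8, 8), 128, dtype=np.uint8)
marked = embed_lsb(img, WATERMARK_BITS)
recovered = detect_lsb(marked, WATERMARK_BITS.size)
```

A scheme this naive fails the robustness requirement discussed below: cropping, resizing, or recompressing the image destroys the payload, which is exactly why production watermarks spread their signal across the whole image instead.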

Now, researchers at the University of Waterloo in Canada have worked out a way to subvert this defensive watermarking. Andre Kassis and Urs Hengartner at the University’s Cheriton School of Computer Science have created a tool called UnMarker.

UnMarker removes those watermarks from images, making it impossible for watermark detectors to determine that an image has been artificially generated. The scientists say that the tool is universal, defeating all watermarking schemes. These include semantic watermarks, which alter the structure of the image itself. These are more deeply embedded in an image, and traditionally tougher to counter.

The tool capitalizes on two fundamental requirements of watermarking schemes. The first is that they mustn’t degrade the quality of the image. The second is that they must be robust against manipulation such as cropping. That means watermarks are restricted in how they can alter an image: they have to focus on shifting pixel intensities in the picture.

Relying on this fact, Kassis and Hengartner’s tool analyzes the frequency spectrum of an image’s pixels to see if anything is unusual. If it finds an anomaly, it uses that as a sign that there’s a watermark. It then rearranges the pixel frequencies across the image so that it won’t trigger a watermark detector.
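As a rough illustration of the frequency-domain idea (not UnMarker’s actual algorithm, which the paper describes in detail), one can take an image’s 2-D Fourier transform, flag frequency bins whose magnitude stands out sharply from the rest of the spectrum, dampen them, and transform back. The threshold and function name below are illustrative assumptions.

```python
import numpy as np

def suppress_spectral_anomalies(image: np.ndarray, z_thresh: float = 4.0) -> np.ndarray:
    """Illustrative sketch: dampen frequency bins that stand out from the
    image's overall spectrum, then return to pixel space. This is NOT the
    UnMarker algorithm, just the general frequency-domain intuition."""
    spectrum = np.fft.fft2(image)
    magnitude = np.abs(spectrum)
    # Score each bin's log-magnitude against the global distribution.
    log_mag = np.log1p(magnitude)
    z = (log_mag - log_mag.mean()) / (log_mag.std() + 1e-12)
    outliers = z > z_thresh
    # Pull suspicious bins toward the median magnitude, preserving phase.
    median_mag = np.median(magnitude)
    damped = np.where(outliers, median_mag * np.exp(1j * np.angle(spectrum)), spectrum)
    return np.real(np.fft.ifft2(damped))

# Example: run the filter on a smooth synthetic gradient.
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
cleaned = suppress_spectral_anomalies(img)
```

The key property such an approach exploits is the one noted above: because a watermark must stay invisible, its footprint in the spectrum is small and statistical, so redistributing suspicious frequencies need not visibly damage the image.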

UnMarker, which the researchers have released publicly, works without any access to the AI algorithm’s internal workings. Neither does it need any other data to work, they add. It’s a ‘black box’ mechanism. You can just run it as a watermark eraser.

It’s not perfect, but it reduces the best detection rate to 43%, even against semantic watermarks. At that rate, a detection tool’s verdict is little better than a coin flip, so its results can’t be trusted.

“Our findings show that defensive watermarking is not a viable defense against deepfakes, and we urge the community to explore alternatives,” the researchers said in their paper.

So the battle against misinformation continues. Now it’s up to watermark designers to up the ante or develop another method to flag deepfakes. We’re not sure this cat-and-mouse game will ever end.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

