Google made a watermark for AI images that you can’t edit out

With SynthID, you should be able to edit a photo all you want without destroying the AI watermark. | Image: Google

The Google DeepMind team has believed for years that building great generative AI tools also requires building great tools to detect what has been created by AI. There are plenty of obvious, high-stakes reasons why, says Google DeepMind CEO Demis Hassabis. “Every time we talk about it and other systems, it’s, ‘What about the problem of deepfakes?’” With another contentious election season coming in 2024 in both the US and the UK, Hassabis says that building systems to identify and detect AI imagery is more important all the time.

Hassabis and his team have been working on a tool for the last few years, which Google is releasing publicly today. It’s called SynthID, and it’s designed to essentially watermark an AI-generated image in a way that is imperceptible to the human eye but easily caught by a dedicated AI detection tool.

The watermark is embedded in the pixels of the image, but Hassabis says it doesn’t alter the image itself in any noticeable way. “It doesn’t change the image, the quality of the image, or the experience of it,” he says. “But it’s robust to various transformations — cropping, resizing, all of the things that you might do to try and get around normal, traditional, simple watermarks.” As SynthID’s underlying models improve, Hassabis says, the watermark will be even less perceptible to humans but even more easily detected by DeepMind’s tools.
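
Google hasn’t said how SynthID’s embedding actually works, but the general idea of an imperceptible, pixel-level watermark can be illustrated with a classical spread-spectrum scheme: add a faint pseudorandom pattern derived from a secret key, then check for it later by correlation. The Python sketch below is a toy illustration of that older technique, not SynthID itself, which uses trained neural networks whose details are unpublished:

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a faint pseudorandom +/-1 pattern derived from `key`.

    Toy spread-spectrum scheme for illustration only; SynthID's
    learned embedder is unpublished and far more sophisticated.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Correlate the image against the key's pattern.

    Scores near zero suggest no watermark; scores near `strength`
    suggest the pattern is present.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    residual = image.astype(np.float64) - image.mean()
    return float((residual * pattern).mean())
```

A fixed pattern like this is invisible to the eye, but a crop or resize misaligns it and breaks the correlation, which is exactly the kind of fragility Hassabis says SynthID’s learned approach is designed to avoid.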

That’s as technical as Hassabis and Google DeepMind want to be for now. Even the launch blog post is sparse on details because SynthID is still a new system. “The more you reveal about the way it works, the easier it’ll be for hackers and nefarious entities to get around it,” Hassabis says. SynthID is rolling out first in a Google-centric way: Google Cloud customers who use the company’s Vertex AI platform and the Imagen image generator will be able to embed and detect the watermark. As the system gets more real-world testing, Hassabis hopes it’ll get better. Then Google will be able to use it in more places, share more about how it works, and gather even more data on its performance.
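
For developers, the initial integration is just ordinary image generation on Vertex AI, with the watermark applied on Google’s side. Here is a minimal sketch using the Vertex AI Python SDK’s preview Imagen interface; the project ID is a placeholder, and because the feature is new, the model version string and SDK surface are assumptions to check against Google’s docs:

```python
# Sketch: generate an Imagen image on Vertex AI. The preview SDK names
# and the "imagegeneration@002" version string are assumptions; the
# SynthID watermark itself is embedded server-side, in the pixels.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="a watercolor painting of a lighthouse at dusk",
    number_of_images=1,
)
response[0].save("lighthouse.png")
```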


Google’s SynthID detection tool reports one of three results, indicating how likely it is that an image was AI-generated. | Image: Google
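
That three-way messaging amounts to bucketing a raw detection score into bands. A toy version, reusing the `detect_watermark` score from the earlier sketch, might look like the following; the thresholds and wording here are invented, and only the three-level structure comes from Google’s screenshots:

```python
def verdict(score: float, low: float = 0.2, high: float = 1.0) -> str:
    """Map a raw detection score to three-way messaging.

    The thresholds and strings are invented for illustration;
    Google has not published SynthID's actual bands.
    """
    if score >= high:
        return "Digital watermark detected"
    if score >= low:
        return "Digital watermark possibly detected"
    return "Digital watermark not detected"
```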

Eventually, Hassabis seems to hope SynthID might be something like an internet-wide standard. The foundational ideas could even be used in other media like video and text. Once Google has proven the tech, “the question is scaling it up, sharing it with other partners that want it, and then scaling up the consumer solution — and then having that debate with civil society about where we want to take this.” He says over and over that this is a beta test, a first try at a new thing, “and not a silver bullet to the deepfake problem.” But he clearly thinks it could be huge.

Of course, Google’s not the only company with this particular ambition. Far from it. Just last month, Meta, OpenAI, Google, and several of the other biggest names in AI promised to build in more protections and safety systems for their AI. A number of companies are also working with a protocol called C2PA, which uses cryptographic metadata to tag AI-generated content. Google is, in many ways, playing catch-up on all its AI tools, including detection. And it seems likely that we’re going to get too many AI-detection standards before we get the ones that actually work. But Hassabis is confident that watermarking is at least going to be part of the answer around the web.
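
C2PA takes the opposite approach to SynthID: it signs provenance metadata into the file container rather than hiding a signal in the pixels, so the two schemes fail differently. The sketch below, reusing the toy helpers from earlier (file names are placeholders), shows the trade-off: a bare pixel re-encode silently drops container metadata, where a C2PA manifest would live, while a pixel-domain watermark survives:

```python
# Container metadata vs. pixel watermarks. Pillow's default save writes
# fresh file structure, discarding metadata chunks, but a lossless PNG
# round trip preserves the pixel pattern exactly.
import numpy as np
from PIL import Image

original = np.asarray(Image.open("lighthouse.png").convert("RGB"))
marked = embed_watermark(original, key=42)        # toy helper from above

Image.fromarray(marked).save("reencoded.png")     # any container metadata is gone...
reloaded = np.asarray(Image.open("reencoded.png"))
print(detect_watermark(reloaded, key=42))         # ...but the pixel watermark remains
```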

SynthID is launching during Google’s Cloud Next conference, where the company shows business customers what’s new in its Cloud and Workspace products. Thomas Kurian, Google Cloud’s CEO, says that usage of the Vertex AI platform is absolutely exploding: “The models are getting more and more sophisticated, and we’ve had a huge, huge ramp in the number of people using the models.” That growth, plus the improvement in the SynthID system, made Kurian and Hassabis feel this was the time to launch.

Customers are definitely worried about deepfakes, Kurian says, but they also have much more mundane AI detection needs. “We have a lot of customers who use these tools to create images for ad copy,” he says by way of example, “and they want to verify the original image because many times the marketing department has a central team that actually creates the original blueprint image.” Retail is another big one: some retailers are using AI tools to generate descriptions for their huge catalog of products, and they need to make sure the product photos they’re uploading don’t get mixed up with the generated images they’re using for brainstorming and iteration purposes. (You might already be seeing DeepMind-created descriptions like this, by the way, both on retail websites and in places like YouTube Shorts.) They may not be quite as viscerally important as fake Trump mug shots or a swagged-out pope, but these are the ways AI is already showing up in day-to-day business.

One thing Kurian says he’s looking for as SynthID rolls out — other than whether the system, you know, works — is how and where people want to use it. He’s pretty sure Slides and Docs will need SynthID integration, for one. “When you’re using Slides, you want to know where you’re deriving images from.” But where else? Hassabis suggests SynthID could eventually be offered as a Chrome extension or even built into the browser so it can identify generated images all over the web. But let’s say that happens: should the tool proactively flag everything that might be generated or wait for some kind of query from the user? Is a huge red triangle the right way to say “this was made with AI,” or should it be something more subtle?

Kurian suggests that there might ultimately be lots of user experience options. As long as the underlying tech works consistently, he figures, users could choose how exactly they want it to appear. It could even vary by topic: maybe you don’t much care if the Slides background you’re using was created by humans or AI, but “if you’re in hospitals scanning tumors, you really want to make sure that was not a synthetically generated image.”

The launch of any AI detection tool is guaranteed to be the start of an arms race. In many cases, a losing one: OpenAI has already given up on a tool meant to detect text written by its own ChatGPT chatbot. If SynthID catches on, it will only inspire hackers and developers to find creative ways around the system, which will force Google DeepMind to improve the system, and round and round they’ll go. Hassabis says, with only a smidge of resignation, that the team is ready for that. “It will probably have to be a live solution that we have to update,” he says, “more like antivirus or something like that. You’re always going to have to be alert to a new type of attack and new type of transform.”

For now, that’s still a far-off problem because the whole initial system of AI image creation, use, and detection is controlled by Google. But DeepMind built this with the whole internet in mind, and Hassabis says he’s ready for the long journey of bringing SynthID everywhere it needs to be. But then he catches himself — one thing at a time, he says. “It would be premature to think about the scaling and the civil society debates until we’ve proven out that the foundational piece of the technology works.” That’s the first job and the reason SynthID is launching now. If and when SynthID or something like it really works, then we can start to figure out what it means for life online.

------------
By: David Pierce
Sourced From: www.theverge.com/2023/8/29/23849107/synthid-google-deepmind-ai-image-detector
Published Date: Tue, 29 Aug 2023 13:00:00 +0000