Google is rolling out an important new update that will make it easier to identify photos that have been generated or edited with AI.
Starting this week, Google Photos is adding a new “AI Information” section to the image details view, letting users know if the image they’re viewing was created or enhanced using AI. The section shows whether an image was generated by an AI tool such as Google’s Gemini app or manipulated by an AI-powered feature such as Google Photos’ Magic Eraser.
First leaked by tipster Assemble Debug earlier this month, the new AI transparency features were officially announced in a recent Google Photos blog post detailing how they can help users spot AI in images.
As revealed in the screenshot above, the new AI information section includes fields such as “Credit: Created with Google AI” and “Digital Source Type: Created using Generative AI”, making clear that the selected image is AI-generated rather than a real photo. These details come from embedded information known as IPTC metadata, which can optionally be written into an image file whenever it is saved or edited.
Google says it will also show when tools like Magic Editor, Magic Eraser, and Zoom Enhance have been used. And because IPTC is an industry-standard metadata format, similar AI information may be exposed by apps from many other companies, including Adobe and Microsoft.
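For readers who want to inspect this metadata themselves, here is a minimal sketch in Python using the Pillow library (the file name is hypothetical). Note that classic fields such as Credit live in the IPTC IIM block, while newer fields such as Digital Source Type are stored in XMP and need an XMP-aware reader like exiftool.

```python
# Minimal sketch: reading classic IPTC metadata from a JPEG with Pillow.
# (2, 110) is the IPTC IIM "Credit" dataset; newer fields such as
# Digital Source Type live in XMP and are not returned here.
from PIL import Image, IptcImagePlugin

im = Image.open("photo.jpg")  # hypothetical file name
iptc = IptcImagePlugin.getiptcinfo(im)  # dict of (record, dataset) -> value, or None

if iptc is None:
    print("No IPTC metadata found")
else:
    credit = iptc.get((2, 110))  # "Credit" field, e.g. b"Created with Google AI"
    if credit:
        print("Credit:", credit.decode("utf-8", errors="replace"))
```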
Why is AI information important?
As concerns grow over the potential misuse of AI, Google’s latest move increases transparency and awareness of the tools available, giving users a first line of defense against image-based misinformation and deepfakes. If you’re unsure about an image you’ve found online, you can download it and open it in Google Photos to quickly see whether an AI tool was used before judging the image’s authenticity.
This change also means recipients may be alerted when a shared photo has been edited, so users may want to think twice before sharing manipulated images. This could have a positive impact not only on stamping out misinformation and deepfakes, but also on promoting more realistic body images online.
Good start, but not a solution to the problem
While Google’s latest move will undoubtedly bring increased transparency to users, it’s important to note that Google is simply surfacing information that is already voluntarily stored within images. Google’s AI-powered tools embed this information automatically, but other tools may not. Furthermore, IPTC metadata is easy to edit or remove before sharing an image: simply taking a screenshot of the image strips it out, so anyone determined to hide their use of AI can do so with little effort. Many online services also automatically remove such data when content is uploaded, meaning this “AI information” can get lost along the digital journey from the original creator to the end user. Conversely, it’s just as easy for bad actors to add fake AI tags to real photos in order to discredit them.
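To illustrate how fragile this labelling is, here is a small sketch, again in Python with Pillow and hypothetical file names. Simply re-encoding an image, much as a screenshot or an upload pipeline effectively does, produces a file with no IPTC block at all.

```python
# Sketch: demonstrating how re-encoding discards IPTC metadata.
from PIL import Image, IptcImagePlugin

original = Image.open("ai_photo.jpg")  # hypothetical AI-labelled image
print(IptcImagePlugin.getiptcinfo(original))  # may show AI-related fields

# A plain re-save writes fresh pixel data; Pillow does not carry the
# IPTC block over, so any AI label disappears.
original.convert("RGB").save("reencoded.jpg", "JPEG")
print(IptcImagePlugin.getiptcinfo(Image.open("reencoded.jpg")))  # -> None
```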
However, more robust technologies are in development. Google is already working on SynthID, for example, which can embed invisible watermarks in AI-generated images, video, text, or audio and later detect them using software tools.
These watermarks are much harder to remove because they are deeply tied to the media they protect, rather than being simple tags that can be stripped or modified. Removing an artist’s name from the label next to a painting is easy, for example, but disguising the artist’s style is far more difficult because it is an essential part of the work itself. SynthID watermarks are like imperceptible traces of the creator’s hand that only a skilled forensic art analyst can detect.
Google Photos’ new AI information can be found by selecting a photo in the app and swiping up, or by tapping the (i) icon in the web version at photos.google.com.