Google to label AI-generated images after viral Trump deepfake

Google is adding features that will help users identify pictures that were generated through artificial intelligence — a move that came after deepfake images of Pope Francis in a Balenciaga puffer jacket and Donald Trump resisting arrest took the internet by storm earlier this year.

The latest move by Google came as critics, including Elon Musk and the “Godfather of AI” Dr. Geoffrey Hinton, warn that AI could exacerbate the spread of online misinformation by providing inaccurate answers or producing images nearly indistinguishable from the real thing.

Google said Wednesday it will add a “markup” in the metadata of photos produced by its own AI models to show that the images are computer-generated.

Google Search will use the metadata to display a warning label when AI-generated images appear in its results.

A caption underneath the image will note that it is “self-labeled as AI generated,” according to an example provided by the company in a blog post.
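For readers curious how such a marker might be detected in practice, below is a minimal sketch. It assumes the flag is written into the image's embedded XMP/IPTC metadata as a DigitalSourceType value of "trainedAlgorithmicMedia" (the existing IPTC convention for synthetic media); the article does not specify which field Google actually writes, so the field name, value, and the `looks_ai_generated` helper are illustrative assumptions, not Google's implementation.

```python
import re


def looks_ai_generated(image_path: str) -> bool:
    """Rough check for an AI-generation marker in a JPEG/PNG file's XMP metadata.

    Assumes the marker is the IPTC DigitalSourceType value
    "trainedAlgorithmicMedia"; this is a hypothetical example,
    not Google's documented mechanism.
    """
    with open(image_path, "rb") as f:
        data = f.read()

    # XMP metadata is embedded in the file as a plain XML packet,
    # so a byte-level search is enough for a rough check.
    xmp_match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    if not xmp_match:
        return False

    # "trainedAlgorithmicMedia" is the IPTC vocabulary term for media
    # created by a generative model trained on sampled content.
    return b"trainedAlgorithmicMedia" in xmp_match.group(0)


if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))
```

A search engine or browser extension could run a check like this on indexed images and surface a label when the marker is present, which is roughly the workflow the blog post describes.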

Images from other prominent publishers, including Midjourney and Shutterstock, will display similar warning labels in the near future.

Additionally, a new tool called “about this image” is set to debut within Google’s search engine in the “coming months,” the company said in a blog post.


Google gave an example of an AI-generated picture of a faked moon landing.

The tab will allow users easy access to information about a particular photo that appears in search results, including when the picture was first “indexed” by Google and where else it has appeared online.

Google gave a hypothetical case involving an AI-generated photo depicting a film crew faking the Apollo 11 moon landing in 1969.

By clicking the “about this image” tab, users would see related articles debunking the image and other key details showing it was fake.

“With this background information on an image, you can get a better understanding of whether an image is reliable — or if you need to take a second look,” the blog post said.


Google will begin adding a "markup" to the metadata of AI-generated images.

In March, the AI-generated photos of Pope Francis in a fashion-forward coat and sunglasses went viral on Twitter and other social media platforms.

One post featuring the picture generated nearly 21 million views on Twitter.

Pablo Xavier, the AI artist who allegedly generated the image, claimed that he “didn’t want it [the pictures] to blow up like that” and admitted it’s “definitely scary” that “people are running with it and thought it was real without questioning it.”


A deepfake image of Pope Francis in a Balenciaga puffer jacket went viral earlier this year.

An AI-generated image of Donald Trump clashing with the NYPD. (Twitter / Eliot Higgins)

In another bizarre case from March, AI-generated deepfake images depicting Trump resisting arrest and clashing with NYPD officers spread rapidly on social media.

Hinton, who recently quit his job at Google so he could freely discuss his concerns about the AI technology he helped create, expressed fears it will be used for various nefarious purposes.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

Google unveiled its latest AI-centered upgrades at a closely watched event on Wednesday as it scrambles to keep pace with rival Microsoft, which is a key investor in OpenAI and its immensely popular ChatGPT.

The company also unveiled a $1,800 Pixel Fold smartphone at the event.


Source: New York Post
