AI-generated images have become increasingly prevalent in Google search results in recent months, crowding out legitimate results and making it harder for users to find what they’re actually looking for. In response, Google announced on Tuesday that it will begin labeling AI-generated and AI-edited images in search results in the coming months.

The company will flag such content through the “About this image” window, and the label will appear in Search, Google Lens, and Android’s Circle to Search. Google is also applying the technology to its ad services and is considering adding a similar flag to YouTube videos, but will “have more updates on that later in the year,” per the announcement post.

Google will rely on Coalition for Content Provenance and Authenticity (C2PA) metadata to identify AI-generated images. The C2PA is an industry group that Google joined as a steering committee member earlier this year. C2PA metadata tracks an image’s provenance, identifying when and where the image was created, as well as the equipment and software used in its generation.
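For a sense of what that provenance data looks like in practice, here is a minimal sketch of reading it with the open-source c2patool command-line utility published by the C2PA’s developer community. The image file name is a placeholder, and the JSON field names follow c2patool’s typical output, which may vary between tool versions.

```python
import json
import subprocess


def read_provenance(image_path: str) -> None:
    """Print the C2PA provenance record embedded in an image, if any.

    Relies on the open-source `c2patool` CLI, which dumps an image's
    C2PA manifest store as JSON. Field names below are assumptions
    based on c2patool's typical output format.
    """
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("No C2PA manifest found (or c2patool failed).")
        return

    store = json.loads(result.stdout)
    # The manifest store points at the most recent ("active") manifest.
    active = store["manifests"][store["active_manifest"]]

    # claim_generator names the software that produced the image,
    # e.g. an AI image generator or a camera's firmware.
    print("Generated by:", active.get("claim_generator"))

    # Assertions carry the provenance details, such as the creation
    # and edit actions performed on the asset.
    for assertion in active.get("assertions", []):
        print(assertion["label"], "->", json.dumps(assertion["data"]))


if __name__ == "__main__":
    read_provenance("example.jpg")  # hypothetical file name
```

A labeling system like Google’s would inspect this same manifest data and, if the recorded actions or generator indicate AI involvement, surface that in the “About this image” window.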

So far, a number of industry heavyweights have joined the C2PA, including Amazon, Microsoft, OpenAI, and Adobe. However, the standard itself has received little attention from hardware manufacturers and can currently only be found on a handful of Sony and Leica camera models. Some prominent developers of AI-generation tools have also declined to adopt the standard, such as Black Forest Labs, which makes the Flux model that Grok leverages for its image generation.

The number of online scams utilizing AI-generated deepfakes has exploded in the past two years. In February, for example, a Hong Kong-based financier was duped into transferring $25 million to scammers who posed as the company’s CFO during a video conference call. In May, a study by verification provider Sumsub found that scams using deepfakes increased 245% globally between 2023 and 2024, with a 303% increase in the U.S. specifically.

“The public accessibility of these services has lowered the barrier of entry for cyber criminals,” David Fairman, chief information officer and chief security officer of APAC at Netskope, told CNBC in May. “They no longer need to have special technological skill sets.”
