Adobe is giving creators a way to prove their art isn’t AI slop

By Andrew Tarantola
Published October 31, 2024

With AI slop taking over the web, being able to confirm a piece of content’s provenance is more important than ever. Adobe announced on Tuesday that it will begin rolling out a beta of its Content Authenticity web app in the first quarter of 2025, enabling creators to digitally certify their works as human-made, and is immediately launching a Content Authenticity browser extension for Chrome to help protect content creators until the web app arrives.

Adobe’s system relies on a combination of digital fingerprinting, watermarking, and cryptographic metadata to certify the authenticity of images, video, and audio files. Unlike traditional metadata, which is easily circumvented with a screenshot, Adobe’s approach can still identify the creator of a registered file even after the credentials have been scrubbed. This enables the company to “truly say that wherever an image, or a video, or an audio file goes, on anywhere on the web or on a mobile device, the content credential will always be attached to it,” Adobe Senior Director of Content Authenticity Andy Parsons told TechCrunch.
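To make that layered approach concrete, here is a minimal Python sketch of the general idea, not Adobe’s actual implementation: a cryptographically signed manifest travels with the file, while a separate fingerprint lets a registry recover the credential even after the embedded metadata has been stripped. The `SIGNING_KEY`, the `fingerprint` stand-in (a plain hash in place of a true perceptual fingerprint), and the `registry` lookup are all illustrative assumptions.

```python
# Illustrative sketch only; not Adobe's implementation.
import hashlib
import hmac
import json

SIGNING_KEY = b"creator-private-key"  # placeholder; real systems use asymmetric keys


def sign_credential(creator: str, image_bytes: bytes) -> dict:
    """Build provenance metadata and sign it so tampering is detectable."""
    manifest = {"creator": creator, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual fingerprint that survives re-encoding.

    A real system would hash visual features rather than raw bytes.
    """
    return hashlib.sha256(image_bytes).hexdigest()[:16]


# Registry mapping fingerprints back to credentials, so provenance can be
# recovered even after the embedded metadata has been scrubbed.
registry: dict[str, dict] = {}

original = b"...image pixels..."
credential = sign_credential("Jane Artist", original)
registry[fingerprint(original)] = credential

# Later: a copy circulates with its metadata stripped. Matching its
# fingerprint against the registry still identifies the creator.
scrubbed_copy = original  # toy case: a byte-identical copy
recovered = registry.get(fingerprint(scrubbed_copy))
print(recovered["creator"] if recovered else "no credential found")
```

In this toy model the signed manifest plays the role of the attached content credential, and the fingerprint lookup mirrors the claim that a registered file can be re-identified even when that metadata has been removed.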

Both the Chrome extension and the forthcoming web app will be available to the public, whether or not you’re one of Adobe’s 33 million paying subscribers or Firefly users. “We’re going to release the Content Authenticity browser extension for Chrome as part of this software package, and also something we call the Inspect tool within the Adobe Content Authenticity website,” Parsons said. “These will help you discover and display content credentials wherever they are associated with content anywhere on the web, and that can show you again who made the content, who gets credit for it.”

This announcement comes amid Adobe’s larger push to build trust through transparency in digital content. The company has founded a pair of industry groups, the Content Authenticity Initiative (CAI) and the open standards consortium the Coalition for Content Provenance and Authenticity (C2PA), to spur adoption of content credentials across the industry. To date, the groups have attracted more than 2,000 signatories, including nearly every major camera manufacturer, along with AI frontrunners Microsoft and OpenAI, as well as social media platforms including TikTok, Instagram, and Facebook.
