Last Friday, the estate of famed 20th century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-generated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”

While the Adobe Stock platform, where the images were made available, does allow for AI-generated images, The Verge notes that the site’s contributor terms prohibit images “created using prompts containing other artist names, or created using prompts otherwise intended to copy another artist.”

Adobe has since removed the offending images, conceding in the Threads conversation that “this goes against our Generative AI content policy.”

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”

The ability to create high-resolution images of virtually any subject and in any visual style by simply describing the idea with a written prompt has helped launch generative AI into the mainstream. Image generators like Midjourney, Stable Diffusion and Dall-E have all proven immensely popular with users, though decidedly less so with the copyright holders and artists whose styles those programs imitate and whose existing works those AI engines are trained on.

Adobe’s own Firefly generative AI platform was, the company claimed, trained on its extensive, licensed Stock image library. As such, Firefly was initially marketed as a “commercially safe” alternative to other image generators like Midjourney or Dall-E, which were trained on datasets scraped from the public internet.

However, an April report from Bloomberg found that some 57 million images in the Stock database, roughly 14% of the total, were AI-generated, some of them created with the very data-scraping AI generators Adobe competes against.

“Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists’ names,” a company spokesperson told Bloomberg at the time.

