OpenAI, the company behind the popular ChatGPT generative AI solution, released a report saying it has blocked more than 20 operations and deceptive networks worldwide so far in 2024. The operations differed in objective, scale, and focus, and ranged from debugging malware to writing articles for websites and generating fake bios and content for social media accounts.

OpenAI says it analyzed the activities it stopped and shared key insights from that analysis. “Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the report says.

This is especially important given that it’s an election year in various countries and regions, including the United States, Rwanda, India, and the European Union. In early July, for example, OpenAI banned a number of accounts that generated comments about the elections in Rwanda, which were then posted on X (formerly Twitter) by a range of accounts. So it’s reassuring to hear that, according to OpenAI, the threat actors couldn’t make much headway with these campaigns.

Another win for OpenAI was disrupting a China-based threat actor known as “SweetSpecter,” which attempted to spear-phish OpenAI employees at both their corporate and personal email addresses. The report goes on to say that in August, Microsoft exposed a set of domains it attributed to an Iranian covert influence operation known as “STORM-2035.” “Based on their report, we investigated, disrupted and reported an associated set of activity on ChatGPT,” OpenAI says.

OpenAI also notes that the social media posts created with its models attracted little attention, receiving few or no comments, likes, or shares. The company says it will continue to anticipate how threat actors might use advanced models for harmful ends and plans to take the necessary actions to stop them.
