A self-professed novice has reportedly created a powerful data-mining malware using just ChatGPT prompts, all within a span of a few hours.
Aaron Mulgrew, a Forcepoint security researcher, recently shared how he created zero-day malware exclusively on OpenAI’s generative chatbot. While OpenAI has protections against anyone attempting to ask ChatGPT to write malicious code, Mulgrew found a loophole by prompting the chatbot to create separate lines of the malicious code, function by function.
After compiling the individual functions, Mulgrew had a nigh-undetectable data-stealing executable on his hands. And this was not your garden-variety malware either — it was as sophisticated as a nation-state attack, able to evade every detection-based vendor it was tested against.
Just as crucially, Mulgrew’s malware differs from “regular” nation-state iterations in that it doesn’t require a team of hackers (or anywhere near the usual time and resources) to build. Mulgrew, who didn’t write any of the code himself, had the executable ready in just hours, as opposed to the weeks such a project usually takes.
The Mulgrew malware (it has a nice ring to it, doesn’t it?) disguises itself as a screensaver app (SCR extension), which then auto-launches on Windows. The software then sifts through files (such as images, Word docs, and PDFs) for data to steal. The impressive part is that the malware uses steganography to break the stolen data into smaller pieces and hide them within images on the computer. Those images are then uploaded to a Google Drive folder, a procedure that avoids detection.
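To be clear about what “steganography” means here: the general technique is hiding payload bits inside innocuous-looking data, most commonly in the least-significant bits of image pixels. The sketch below is not Mulgrew’s code (which was never published); it is a minimal, illustrative example of LSB embedding over a plain byte array standing in for decoded pixel data, with made-up function names.

```python
# Minimal sketch of least-significant-bit (LSB) steganography — the general
# class of technique the article describes, NOT Mulgrew's actual code.
# The bytearray "carrier" stands in for an image's decoded pixel bytes.

def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide the payload, one bit per pixel byte, in each byte's lowest bit."""
    bits = []
    for byte in payload:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))  # MSB first
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` hidden bytes by reading the pixel LSBs back."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for pixel in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (pixel & 1)
        data.append(byte)
    return bytes(data)

# Usage: hide a short message in stand-in pixel data, then recover it.
carrier = bytearray(range(256))
secret = b"stolen.txt"
stego = embed(carrier, secret)
recovered = extract(stego, len(secret))
assert recovered == secret
```

Because each pixel byte changes by at most one in value, the carrier image looks visually unchanged — which is why this class of technique is attractive for smuggling data past content inspection.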
Equally impressive is that Mulgrew was able to refine and harden his code against detection using simple ChatGPT prompts, which really raises the question of how safe ChatGPT is. In early VirusTotal tests, the malware was flagged by only five of 69 detection products; a later version of the code was detected by none.
Note that the malware Mulgrew created was a test and is not publicly available. Nonetheless, his research shows how easily users with little to no advanced coding experience can bypass ChatGPT’s weak protections and create dangerous malware without writing a single line of code themselves.
But here’s the scary part of all this: code of this kind usually takes a larger team weeks to develop. We wouldn’t be surprised if nefarious hackers are already building similar malware through ChatGPT as we speak.