As the public adjusts to trusting artificial intelligence, a perfect environment is also brewing for hackers to trap internet users into downloading malware.
The latest target is the Google Bard chatbot, which is being used as a decoy to lure users into clicking ads that deliver malicious code. The ads are styled as promotions for Google Bard, making them appear legitimate. Once clicked, however, they direct users to a malware-ridden webpage instead of an official Google page.
Security researchers at ESET first spotted discrepancies in the ads, including several grammar and spelling errors in the copy, as well as a writing style that falls short of Google's usual standard, according to TechRadar.
The ad directs users to the webpage of a Dublin-based firm called rebrand.ly rather than to a Google-hosted domain, where information about the Bard chatbot would actually be found. Researchers have not confirmed an attack, but warn that accessing such pages while logged into browser accounts could leave your private data susceptible to being hacked.
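The core red flag described above is that a link promoting a Google product resolves to a third-party host rather than a Google-owned domain. As a minimal illustrative sketch (not any researcher's actual tooling, and with a hypothetical allowlist of official suffixes), a hostname check might look like this:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of Google-owned domain suffixes for illustration only.
OFFICIAL_SUFFIXES = ("google.com", "withgoogle.com")

def looks_like_official_google_link(url: str) -> bool:
    """Return True if the URL's hostname is a Google-owned domain."""
    host = urlparse(url).hostname or ""
    return any(host == s or host.endswith("." + s) for s in OFFICIAL_SUFFIXES)

print(looks_like_official_google_link("https://bard.google.com/"))      # True
print(looks_like_official_google_link("https://rebrand.ly/some-path"))  # False
```

Real phishing defenses are far more involved (redirect chains, homoglyph domains, certificate checks), but the suffix test captures the basic mismatch the researchers flagged.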
Additionally, the ad includes a download button that, when clicked, downloads a file presented as if it were hosted in a personal Google Drive space; in reality, it is a confirmed piece of malware named GoogleAIUpdate.rar.
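A software "update" for a web service, delivered as a compressed archive, is a classic malware red flag of the kind this file name exhibits. As a hedged sketch (this is an illustrative heuristic, not ESET's detection logic), a simple filename filter could flag it:

```python
import re

# Illustrative heuristic only: archives whose names pose as updates or
# installers (e.g. "GoogleAIUpdate.rar") are commonly used to disguise malware.
SUSPICIOUS_PATTERN = re.compile(r"(update|setup|install).*\.(rar|zip|7z)$",
                                re.IGNORECASE)

def is_suspicious_download(filename: str) -> bool:
    """Return True if the filename matches the update-in-an-archive pattern."""
    return bool(SUSPICIOUS_PATTERN.search(filename))

print(is_suspicious_download("GoogleAIUpdate.rar"))  # True
print(is_suspicious_download("report.pdf"))          # False
```

A filter this crude would produce false positives on legitimate installers; in practice it would be one weak signal among many, not a verdict on its own.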
ESET researcher Thomas Uhlemann noted that, as of Monday, the "campaign was still visible in different variations."
He added that this is one of the larger cyberattacks of its kind he has seen, with some variations featuring fake ads for Meta AI or other imitations of Google AI marketing.
Bard is currently the biggest competitor to OpenAI's ChatGPT chatbot. ChatGPT experienced a similar cyberattack in late February, when an info-stealing malware called Redline was observed by security researcher Dominic Alvieri. The malware was hosted on the website chat-gpt-pc.online, which featured ChatGPT branding and was advertised on a Facebook page as a legitimate OpenAI link to persuade people into accessing the infected site.
Alvieri also found fake ChatGPT apps on Google Play and various other third-party Android app stores, which could install malware on devices if downloaded.
ChatGPT has been a major target of bad actors, especially since it introduced its $20-per-month ChatGPT Plus tier in early February. Bad actors have even gone as far as using the chatbot itself to create malware, relying on a manipulated version of OpenAI's GPT-3 API programmed to generate malicious content, such as text for phishing emails and malware scripts.