Scammers Exploit Grok AI With Video Ad Scam to Push Malware on X
Researchers at Guardio Labs have uncovered a new “Grokking” scam where attackers trick Grok AI into spreading malicious links on X. Learn how it works and what experts are saying.
An ingenious new cybersecurity scam is abusing the popular AI assistant Grok on the social media platform X (formerly Twitter) to bypass security controls and spread malicious links. The scam was uncovered by Nati Tal, Head of Cyber Security Research at Guardio Labs, who has named the technique “Grokking.”
In a series of X posts, Tal explained how this scam works. It starts with malicious video ads that are often filled with questionable content. These ads are designed to grab attention but purposely do not have a clickable link in the main post, which helps them avoid being flagged by X’s security filters. The bad actors instead hide the malicious link in a small “From:” metadata field, which appears to be a blind spot in the platform’s scanning.
The scam’s most clever part comes next. The same attackers then ask Grok a simple question, such as “What is the link to this video?” in a reply to the ad. Grok reads the hidden “From:” field and posts the full malicious link in a new, fully clickable reply.
Because Grok is a trusted, system-level account on X, its response gives the malicious link a massive boost in credibility and visibility. As cybersecurity experts Ben Hutchison and Andrew Bolster point out, this makes the AI itself a “megaphone” for malicious content, exploiting trust rather than just a technical flaw. The links ultimately lead users to dangerous sites that trick them with fake CAPTCHA tests or push downloads of information-stealing malware.
By manipulating the AI, attackers turn the very system meant to enforce restrictions into an amplifier for their malicious content. As a result, links that should have been blocked are instead promoted to millions of unsuspecting users.
Reportedly, these ads have received millions of impressions, with individual campaigns exceeding 5 million views. The attack shows that AI-powered services, while helpful, can be manipulated into becoming powerful tools for cybercriminals.
**Expert Perspectives**
In response to this research, cybersecurity experts have shared their perspectives exclusively with Hackread.com. Chad Cragle, Chief Information Security Officer at Deepwatch, explained the core mechanism: “Attackers hide links in the ad’s metadata and then ask Grok to ‘read it out loud.’” For security teams, he says, platforms need to scan hidden fields, and organisations must train users that even a “verified” assistant can be fooled.
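Cragle’s recommendation to scan hidden fields can be illustrated with a minimal sketch. The ad structure and field names below are hypothetical (X’s internal ad format is not public); the point is simply that a scanner must walk every field, including nested metadata such as the “From:” field, rather than only the visible post text:

```python
import re

# Hypothetical ad object: the visible text carries no link, but the
# "from" metadata field smuggles one in -- the blind spot described above.
AD = {
    "text": "Watch this video now!",  # no clickable link here
    "metadata": {"from": "http://malicious.example/payload"},
}

URL_RE = re.compile(r"https?://\S+")

def find_urls_in_all_fields(ad: dict) -> list[str]:
    """Recursively scan every string field, including nested metadata, for URLs."""
    urls: list[str] = []

    def walk(value):
        if isinstance(value, dict):
            for v in value.values():
                walk(v)
        elif isinstance(value, str):
            urls.extend(URL_RE.findall(value))

    walk(ad)
    return urls

print(find_urls_in_all_fields(AD))  # surfaces the hidden link
```

A scanner limited to `AD["text"]` would find nothing, which is exactly the gap the attackers exploit before handing the echo job to Grok.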
Andrew Bolster, Senior R&D Manager at Black Duck, categorises Grok as a high-risk AI system that fits what is called the “Lethal Trifecta.” He explains that, unlike traditional bugs, in the AI landscape, this kind of manipulation is almost a “feature,” as the model is designed to respond to content regardless of its intent.