The Era of AI-Generated Ransomware Has Arrived
Cybercriminals are increasingly using generative AI tools to fuel their attacks, with new research finding instances of AI being used to develop ransomware.
As cybercrime surges around the world, new research increasingly shows that ransomware is evolving as a result of widely available generative AI tools. In some cases, attackers are using AI to draft more intimidating and coercive ransom notes and conduct more effective extortion attacks. But cybercriminals’ use of generative AI is rapidly becoming more sophisticated. Researchers from the generative AI company Anthropic today revealed that attackers are leaning on generative AI more heavily—sometimes entirely—to develop actual malware and offer ransomware services to other cybercriminals.
Ransomware criminals have recently been identified using Anthropic’s large language model Claude and its coding-focused model, Claude Code, to develop ransomware, according to the company’s newly released threat intelligence report. Anthropic’s findings add to separate research this week from the security firm ESET that highlights an apparent proof of concept for a type of ransomware attack executed entirely by local LLMs running on a malicious server.
Taken together, the two sets of findings highlight how generative AI is pushing cybercrime forward and making it easier for attackers—even those who don’t have technical skills or ransomware experience—to execute such attacks. “Our investigation revealed not merely another ransomware variant, but a transformation enabled by artificial intelligence that removes traditional technical barriers to novel malware development,” researchers from Anthropic’s threat intelligence team wrote.
Over the last decade, ransomware has proven an intractable problem. Attackers have become increasingly ruthless and innovative so victims will keep paying out. By some estimates, the number of ransomware attacks hit record highs at the start of 2025, and criminals continue to make hundreds of millions of dollars per year. As former US National Security Agency and Cyber Command chief Paul Nakasone put it at the Defcon security conference in Las Vegas earlier this month: “We are not making progress against ransomware.”
Adding AI into the already hazardous ransomware cocktail only increases what hackers may be able to do. According to Anthropic’s research, a cybercriminal threat actor based in the United Kingdom, tracked as GTG-5004 and active since the start of this year, used Claude to “develop, market, and distribute ransomware with advanced evasion capabilities.”
On cybercrime forums, GTG-5004 has been selling ransomware services priced from $400 to $1,200, with different tools provided at each package level, according to Anthropic’s research. The company says that while GTG-5004’s products include a range of encryption capabilities, software reliability tools, and detection-evasion methods, the developer does not appear to be technically skilled. “This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques, or Windows internals manipulation without Claude’s assistance,” the researchers write.
Anthropic says it banned the account linked to the ransomware operation and introduced “new methods” for detecting and preventing malware generation on its platforms. These include pattern-matching signatures known as YARA rules, along with checks against known malware hashes, applied to files uploaded to its platforms.
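Anthropic hasn’t published its detection rules, but for readers unfamiliar with the technique, a YARA rule of the kind described above generally pairs textual or binary patterns with a matching condition. The sketch below is purely illustrative; the rule name, string, and hash are invented for this example, not drawn from Anthropic’s actual signatures.

```yara
// Illustrative only: a minimal YARA rule combining a text pattern
// with a file-hash check. All indicators here are placeholders.
import "hash"

rule Example_Ransomware_Indicator
{
    meta:
        description = "Matches a common ransom-note phrase or a known-bad SHA-256"
    strings:
        // Match the phrase in both ASCII and UTF-16 encodings
        $note = "Your files have been encrypted" ascii wide
    condition:
        $note or
        // Placeholder hash (SHA-256 of an empty file), not a real indicator
        hash.sha256(0, filesize) == "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}
```

In practice, scanning platforms run large sets of such rules over uploaded files, flagging anything that matches for review or blocking.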
While such activity so far does not appear to be the norm across the ransomware ecosystem, the findings represent a stark warning.
“There are definitely some groups that are using AI to aid with the development of ransomware and malware modules, but as far as Recorded Future can tell, most aren’t,” says Allan Liska, an analyst for the security firm Recorded Future who specializes in ransomware. “Where we do see more AI being used widely is in initial access.”
Separately, researchers at the cybersecurity company ESET this week claimed to have discovered the “first known AI-powered ransomware,” dubbed PromptLock. The researchers say the malware, which largely runs locally on a machine and uses an open source AI model from OpenAI, can “generate malicious Lua scripts on the fly” and uses these to inspect files the hackers may be targeting, steal data, and deploy encryption. ESET believes the code is a proof-of-concept that has seemingly not been deployed against victims, but the researchers emphasize that it illustrates how cybercriminals are starting to use LLMs as part of their toolsets.
“Deploying AI-assisted ransomware presents certain challenges, primarily due to the large size of AI models and their high computational requirements. However, it’s possible that cybercriminals will find ways to bypass these limitations,” ESET malware researchers Anton Cherepanov and Peter Strycek, who discovered the new ransomware, wrote in an email to WIRED. “As for development, it is almost certain that threat actors are actively exploring this area, and we are likely to see more attempts to create increasingly sophisticated threats.”
Although PromptLock hasn’t been used in the real world, Anthropic’s findings further underscore the speed with which cybercriminals are moving to build LLMs into their operations and infrastructure. The AI company also spotted another cybercriminal group, which it tracks as GTG-2002, using Claude Code to automatically find targets to attack, gain access to victim networks, develop malware, and then exfiltrate data, analyze what had been stolen, and draft a ransom note.
In the last month, this attack impacted “at least” 17 organizations in government, health care, emergency services, and religious institutions, Anthropic says, without naming any of the organizations impacted. “The operation demonstrates a concerning evolution in AI-assisted cybercrime,” Anthropic’s researchers wrote in their report, “where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.”