North Korea’s Kimsuky Group Uses AI-Generated Military IDs in New Attack
North Korea’s Kimsuky hackers use AI-generated fake military IDs in a new phishing campaign, GSC warns, marking a shift from past ClickFix tactics.
Kimsuky, a notorious North Korean hacking group, is now using fake military ID cards created with artificial intelligence (AI) tools in its latest phishing campaign. According to cybersecurity firm Genians Security Center (GSC), this marks an evolution of the group’s earlier ClickFix tactics, which tricked victims into running malicious commands by presenting them with fake security pop-ups.
The new approach was first detected in July 2025 when attackers sent emails that looked like they were from a legitimate South Korean defence institution. These messages were designed to grab attention, usually pretending to be about a new ID card for military personnel.
The first image is the original phishing email sent by the threat actors (Source: GSC). The second image has been translated by Hackread.com with the help of an AI image translator.
The bait is a ZIP file containing what appears to be a draft of a real military ID. But there’s a catch: the convincing photo on the ID isn’t real. It is an AI-generated deepfake, which detection tools rated as fake with 98% certainty, created using widely available AI tools such as ChatGPT.
AI-generated fake military IDs (Source: GSC)
If an unsuspecting person opens the file, the real attack begins: a hidden malicious program starts running in the background. To evade detection, it waits several seconds before quietly downloading an additional payload, a file named LhUdPC3G.bat, from a remote server at jiwooeng.co.kr.
Using a combination of batch files and AutoIt scripts, the hackers then register a scheduled task named HncAutoUpdateTaskMachine that runs every seven minutes, disguised as an update for Hancom Office. Researchers noted that the hackers have used similar tactics in other attacks, with tell-tale strings like “Start_juice” and “Eextract_juice” appearing in their code.
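As a rough illustration of how defenders might act on these indicators, the sketch below is a minimal, hypothetical Python helper (not from GSC’s report) that flags the reported task name in a list of scheduled-task names. On a real Windows host the list would come from something like `schtasks /query /fo csv /nh`; here the input is just a plain list of strings.

```python
# Minimal sketch: flag scheduled tasks matching an indicator reported by GSC.
# The task name is from the report; the helper itself is hypothetical.
SUSPICIOUS_TASK_NAMES = {"HncAutoUpdateTaskMachine"}

def find_suspicious_tasks(task_names):
    """Return entries whose name (ignoring leading backslashes) matches
    a known-bad scheduled-task name.

    On Windows, task_names could be parsed from the first column of:
        schtasks /query /fo csv /nh
    """
    return [name for name in task_names
            if name.lstrip("\\") in SUSPICIOUS_TASK_NAMES]

if __name__ == "__main__":
    sample = ["\\Microsoft\\Windows\\Defrag\\ScheduledDefrag",
              "\\HncAutoUpdateTaskMachine"]
    print(find_suspicious_tasks(sample))  # prints ['\\HncAutoUpdateTaskMachine']
```

A literal name match like this is deliberately simple; attackers can rename tasks, so it only catches the exact persistence artifact described in the report.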
This deepfake military ID campaign shows how the Kimsuky group is constantly changing its tactics, using a more convincing, socially engineered decoy to achieve the same goal: tricking a victim into running a series of scripts that compromise their computer.
This is not the first time the group has used AI for malicious purposes. In June 2025, OpenAI reported that North Korean threat actors created fake identities with AI to pass technical job interviews. Hackers from China, Russia and Iran have also misused AI tools, particularly ChatGPT, for similar activities.
Ultimately, this latest campaign highlights the need for more advanced defences. According to GSC, Endpoint Detection and Response (EDR) systems are essential to detect and neutralise attacks like these, which rely on obfuscated scripts to hide their malicious activity.
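Even without a full EDR, the tell-tale strings GSC observed can feed a basic indicator scan. The sketch below is a hypothetical Python example, not anything from the report, that searches text or file contents for those literal strings:

```python
from pathlib import Path

# Indicator strings reported by GSC. The scanner itself is a hedged sketch,
# not a substitute for a real EDR product.
IOC_STRINGS = ("Start_juice", "Eextract_juice")

def scan_text(text):
    """Return which reported indicator strings appear in the given text."""
    return [ioc for ioc in IOC_STRINGS if ioc in text]

def scan_file(path):
    """Scan one file's raw bytes for the indicator strings."""
    data = Path(path).read_bytes()
    return [ioc for ioc in IOC_STRINGS if ioc.encode() in data]

if __name__ == "__main__":
    print(scan_text("call :Start_juice"))  # prints ['Start_juice']
```

A real EDR goes far beyond fixed strings (behavioural rules, script de-obfuscation, process lineage), but even a literal match on published indicators can help triage suspicious batch files.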