North Korean state-backed hackers have discovered how to manipulate AI systems to create realistic-looking military identification cards for use in sophisticated phishing campaigns.
According to Fox News, a North Korean hacking group known as Kimsuky has been using ChatGPT to generate fake South Korean military IDs that are then attached to phishing emails impersonating legitimate defense institutions.
South Korean cybersecurity firm Genians revealed the operation in a recent blog post, explaining that while ChatGPT has built-in safeguards against creating government IDs, the hackers circumvented these protections by framing their prompts as requests for "sample designs for legitimate purposes."
How Sophisticated State Actors Exploit AI Tools
Kimsuky is a well-established threat actor with a documented history of global espionage campaigns. The U.S. Department of Homeland Security previously identified the group as "most likely tasked by the North Korean regime with a global intelligence-gathering mission" targeting South Korea, Japan, and the United States.
Sandy Kronenberg, CEO and founder of cybersecurity company Netarx, warned that generative AI has dramatically lowered the barriers to sophisticated attacks. The real danger, according to Kronenberg, isn't a single forged document but the way these tools enable coordinated deception campaigns that combine fake documents with follow-up communications across multiple channels.
North Korea isn't operating in isolation: Chinese hackers have also been exploiting AI capabilities for their own cyber operations. Anthropic, the company behind the Claude chatbot, reported that a Chinese hacker used its AI assistant as a comprehensive cyberattack tool for over nine months, targeting Vietnamese telecommunications, agriculture systems, and government databases.
Chinese Hackers Also Weaponizing AI Platforms
OpenAI has identified Chinese hackers using ChatGPT to create password brute-forcing scripts and gather sensitive information about U.S. defense networks. Some operations have even leveraged the AI platform to generate fake social media content designed to amplify political divisions within the United States.
Google has observed similar patterns with its Gemini model, reporting that Chinese groups used it to troubleshoot malicious code and expand network access. North Korean hackers have also used Gemini to craft convincing cover letters and analyze IT job postings as part of their infiltration strategies.
The shift toward AI-powered hacking represents a fundamental change in the threat landscape, one that traditional security training hasn't prepared users for. Clyde Williamson, senior product security architect at data security company Protegrity, called the North Korean military ID forgeries "a wake-up call" that renders conventional phishing detection advice obsolete.
Why Traditional Security Training Falls Short
For years, cybersecurity professionals have trained employees to identify phishing attempts by looking for telltale signs like typos, formatting issues, or awkward language. AI-generated content effectively eliminates these traditional red flags, producing polished, professional-looking communications that appear legitimate.
Williamson emphasized that security training needs a complete reset, focusing on context, intent, and verification rather than superficial content quality. He recommends teaching people to slow down, verify sender information through alternative channels, and feel comfortable reporting suspicious communications without fear of embarrassment.
The technical response must also evolve, with companies investing in email authentication, phishing-resistant multi-factor authentication, and real-time monitoring systems. Without adaptation, average users will find themselves increasingly vulnerable to these sophisticated AI-powered deception techniques.
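The building blocks here are well-established standards rather than exotic tools. As an illustration, email authentication typically means publishing SPF, DKIM, and DMARC records in DNS; a DMARC policy is a simple TXT record along these lines (the domain and reporting address below are placeholders, not a real configuration):

    _dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; aspf=s; adkim=s"

A p=reject policy tells receiving mail servers to refuse messages that fail both SPF and DKIM alignment for the domain, making it considerably harder for attackers to spoof a legitimate institution's sending address.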
Practical Protection Strategies Against AI Scams
Experts recommend several concrete steps to protect against AI-powered scams in this new threat environment. First and foremost, slow down when receiving urgent communications and verify requests through trusted channels before taking action.
Implementing strong antivirus software across all devices can help catch malicious links and downloads before they cause damage. These tools can also alert users to potential phishing attempts and ransomware schemes, providing an essential layer of protection for personal information.
Using a personal data removal service can reduce risk by scrubbing sensitive information from data broker sites that scammers often mine for targeting details. While no service can guarantee the complete removal of all personal data from the internet, these services actively monitor and systematically remove information from hundreds of websites.