In a turning point for cybersecurity, hackers suspected of being affiliated with the North Korean government (DPRK) are using Large Language Models (LLMs) to refine their crypto theft operations, according to a warning issued by Google’s Threat Intelligence Group (GTIG). The use of AI to modify malware code and generate phishing scripts indicates that cyber crime has entered a new and more complex stage.
While artificial intelligence is revolutionizing everyday applications, concerns are mounting about how it is being exploited for cyber crime. The Google report released this week confirms that criminal groups are now using AI models not just for generating text or images, but as a central component of live attack operations.
What is AI Malware? – A Shift in Attack Design
In traditional malware, all of the logic is hard-coded in advance. New AI-supported malware families have fundamentally changed this approach.
New Malware Techniques
GTIG has been tracking at least five distinct AI-enabled malware variants. These have the ability to modify themselves and bypass security tools.
- Dynamic Code Generation: Instead of hard-coding every part of their logic, these malware families use LLMs such as Gemini or Qwen2.5-Coder to generate malicious scripts on the fly at runtime.
- Obfuscation: These malware families use LLMs to obscure their own code to evade detection. This makes it difficult for security systems to identify them.
Just-in-Time Code Creation
GTIG refers to this method as “Just-in-Time Code Creation.” By outsourcing parts of its functionality to an AI model, the malware can continuously mutate, making it far harder for signature-based security tools to keep up. This capability makes these malware variants especially challenging to defend against.
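Conceptually, the pattern GTIG describes boils down to a program requesting source code from a model at runtime and executing the response, instead of shipping that logic in its own binary. The harmless sketch below illustrates only the mechanism, using a trivial string-reversal helper; the model call is stubbed with a fixed response, since the actual APIs and prompts involved are not public.

```python
# Conceptual illustration of "just-in-time code creation":
# instead of hard-coding a helper function, the program obtains
# its source text at runtime and executes it. The LLM call is
# stubbed here with a canned response for illustration.

def query_model_stub(prompt: str) -> str:
    # Stand-in for an LLM API call; always returns the same source.
    return (
        "def reverse_text(s):\n"
        "    return s[::-1]\n"
    )

def load_runtime_function(prompt: str, name: str):
    source = query_model_stub(prompt)
    namespace = {}
    exec(source, namespace)  # compile and run the generated source
    return namespace[name]   # retrieve the freshly defined function

reverse_text = load_runtime_function("write a string reversal helper", "reverse_text")
print(reverse_text("hello"))  # -> "olleh"
```

Because the executed code is produced fresh on each run rather than stored on disk, a different prompt or model response yields a different function body, which is why static, signature-based scanners struggle against this pattern.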
The Threat of DPRK and Crypto Theft
The most concerning aspect of this AI malware threat is the activity of a hacking group suspected of being linked to North Korea, tracked as UNC1069.
The Target – Crypto Wallets
The UN and several nations have long accused North Korean hacking groups of stealing billions of dollars by attacking cryptocurrency exchanges, DeFi platforms, and individual wallets globally. These thefts are believed to be used to fund North Korea’s nuclear weapons programs.
Utilization of Gemini
According to the Google report, the UNC1069 group directly utilized Google’s Gemini AI model to refine its attacks:
- Analysis: They used Gemini to analyze crypto wallet data.
- Script Generation: They used the AI to craft highly convincing, targeted phishing scripts to deceive victims.
This incident shows that AI models are no longer just “learning tools” but have become “generation tools” used to create highly sophisticated and targeted cyber attacks.
Social Media and Security Industry Reaction
Google’s report immediately sparked debate across the security sector and on social media.
Concern from Security Experts
Cybersecurity professionals, especially on platforms like X (Twitter) and LinkedIn, described this development as “expected, yet unprecedented.”
- Hyper-Speed Attacks: They warn that AI malware can evolve and adapt many times faster than human threat analysts can respond.
- Arms Race: One security analyst noted that this will lead to an “arms race” between AI malware and AI-based security systems.
DPRK’s Cyber Strategy
DPRK groups, notably the Lazarus Group, are known for their innovative approaches to crypto theft. The use of AI models demonstrates that they will not hesitate to escalate their theft capabilities.
Google’s Mitigation Steps and Ongoing Protection
Upon discovering this threat, Google took rapid steps to secure its platform.
Action Taken
Google immediately disabled the accounts used for the malicious activity and tightened safeguards restricting access to its AI models, with the goal of preventing threat actors from misusing them for harmful purposes.
Ethical Concerns
This misuse of publicly available LLMs raises ethical questions for AI companies. It underscores the need to combine legal and technical controls to ensure these models are not abused.
The New Security Domain
Google’s threat report marks the beginning of a new era in the cybersecurity world. AI-supported malware poses a massive challenge to traditional defense mechanisms. As crypto wallets and financial markets are increasingly targeted by such sophisticated attacks, it is essential for both institutions and individuals to update their security architecture. In the fight against hackers, the use of AI has become an inevitable new reality.