- GTIG finds threat actors are cloning mature AI models using distillation attacks
- Sophisticated malware can use AI to manipulate code in real time to avoid detection
- State-sponsored groups are creating highly convincing phishing kits and social engineering campaigns
If you’ve used any modern AI tools, you’ll know they can be a great help in taking the tedium out of mundane, burdensome tasks.
Well, it turns out threat actors feel the same way: the latest Google Threat Intelligence Group (GTIG) AI Threat Tracker report has found that attackers are using AI more than ever.
From figuring out how AI models reason in order to clone them, to integrating AI into attack chains to bypass traditional network-based detection, GTIG has outlined some of the most pressing threats – here’s what it found.
How threat actors use AI in attacks
For starters, GTIG found threat actors are increasingly using ‘distillation attacks’ to quickly clone large language models for their own purposes. Attackers fire a huge volume of prompts at a target LLM to tease out how it reasons through queries, then use the responses to train a model of their own.
Attackers can then use that cloned model to avoid paying for the legitimate service, study how the original LLM is built, or probe their own copy for weaknesses that can in turn be used to exploit the legitimate service.
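To make the idea concrete, here is a minimal Python sketch of the data-collection step behind a distillation attack: repeatedly querying a target model and saving the prompt/response pairs as training data for a smaller copy. The `query_target_model` function, the probe prompts, and the JSONL layout are illustrative assumptions, not details from the GTIG report.

```python
import json

def query_target_model(prompt: str) -> str:
    """Hypothetical placeholder for the target LLM's API.

    A real distillation attack would send a huge volume of prompts and
    record the live responses; this stub just echoes the prompt so the
    sketch runs on its own.
    """
    return f"[model response to: {prompt}]"

# Prompts designed to expose how the target model reasons through queries.
probe_prompts = [
    "Explain step by step how you would summarise a quarterly report.",
    "Walk through your reasoning when deciding if an email is spam.",
]

# Save prompt/response pairs in a chat-style JSONL layout - the kind of
# dataset a smaller "student" model could later be fine-tuned on to
# imitate the target's behaviour.
with open("distilled_dataset.jsonl", "w", encoding="utf-8") as f:
    for prompt in probe_prompts:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": query_target_model(prompt)},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

At scale, the same loop run over thousands of carefully chosen prompts gives an attacker a training set that approximates the target model’s behavior without ever seeing its weights.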
AI is also being used to support intelligence gathering and social engineering campaigns. Both Iranian and North Korean state-sponsored groups have utilized AI tools in this way, with the former using AI to gather information on business relationships in order to create a pretext for contact, and the latter using AI to amalgamate intelligence to help plan attacks.
GTIG has also spotted a rise in AI being used to create highly convincing phishing kits for mass distribution to harvest credentials.
Moreover, some threat actors are integrating AI models into malware so it can adapt to avoid detection. One example, tracked as HONESTCUE, dodged network-based detection and static analysis by using Gemini to rewrite and execute code during an attack.
But not all threat actors are alike. GTIG has also noted serious demand for custom AI tools built for attackers, with specific requests for tools capable of writing malware code. For now, attackers rely on distillation attacks to create custom models for offensive use.
But if such tools were to become widely available and easy to distribute, threat actors would likely be quick to fold malicious AI into their attack chains, improving the effectiveness of malware, phishing, and social engineering campaigns.
To defend against AI-augmented malware, many security solutions are deploying AI tools of their own. Rather than relying on static analysis alone, these tools analyze potential threats in real time, recognizing the behavior of adaptive, AI-driven malware.
AI is also being employed to scan emails and messages to spot phishing in real time, at a scale that would otherwise require thousands of hours of human work.
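Neither the report nor this article describes any specific vendor’s pipeline, but a simplified version of AI-assisted phishing triage might look something like the Python sketch below. The pattern list, the `classify_with_model` stub, and the verdict labels are all hypothetical; in a real system the classifier would be a hosted or local model rather than a reuse of the heuristics.

```python
import re

# Cheap textual red flags checked before any model is involved.
SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent.*password",
    r"click (here|the link) to avoid suspension",
]

def quick_heuristic_flags(email_body: str) -> list[str]:
    """Return which suspicious patterns appear in the message."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, email_body, re.I)]

def classify_with_model(email_body: str) -> str:
    """Hypothetical stand-in for an AI classifier.

    A production pipeline would send the message to a trained model and
    get back a verdict such as 'phishing' or 'benign'; here we reuse the
    heuristic flags so the sketch stays self-contained and runnable.
    """
    return "phishing" if quick_heuristic_flags(email_body) else "benign"

def triage(email_body: str) -> dict:
    """Combine fast pattern checks with a model verdict for one message."""
    return {
        "flags": quick_heuristic_flags(email_body),
        "verdict": classify_with_model(email_body),
    }

if __name__ == "__main__":
    sample = "URGENT: your password expires today, click here to avoid suspension."
    print(triage(sample))  # e.g. {'flags': [...], 'verdict': 'phishing'}
```

The appeal of this layered design is speed: cheap heuristics filter the bulk of traffic, while the model handles the ambiguous messages that would otherwise need a human analyst.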
Moreover, Google is actively monitoring Gemini for potentially malicious usage, and has deployed a tool to hunt for software vulnerabilities (Big Sleep) and another to help patch them (CodeMender).
