- AI agents could be used to build and send phishing attacks
- Symantec researchers were able to prompt Operator into sending a malicious email
- These tools are only likely to get more powerful
Cybercriminals have been using AI to assist their cyberattacks for some time, but the arrival of AI “agents”, such as OpenAI’s Operator, means criminals now have far less work to do themselves, experts have claimed.
Previously, AI tools had been observed helping attackers deliver sophisticated attacks faster and more frequently than would have been possible without them, while also lowering the bar to entry so that even relatively low-skilled cybercriminals could mount successful attacks.
Now, researchers from Symantec have been able to use Operator to identify a target, find their email address, create a PowerShell script designed to gather system information, and deliver it to the victim with a “convincing lure.”
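Symantec’s report does not reproduce the script itself, so the sketch below is purely illustrative: a benign Python equivalent (Python rather than PowerShell, and with assumed field names) of the kind of basic machine inventory a “system information” script of this sort typically collects.

```python
# Illustrative sketch only: not the script from Symantec's demonstration.
# Shows the sort of basic, non-destructive system inventory such a script
# typically gathers, using only the Python standard library.
import getpass
import json
import platform
import socket

def collect_system_info() -> dict:
    """Gather a basic inventory of the local machine."""
    return {
        "hostname": socket.gethostname(),
        "username": getpass.getuser(),
        "os": platform.system(),
        "os_version": platform.version(),
        "architecture": platform.machine(),
        "python_version": platform.python_version(),
    }

if __name__ == "__main__":
    # Print the inventory as JSON, the kind of summary an attacker
    # would exfiltrate back to their own infrastructure.
    print(json.dumps(collect_system_info(), indent=2))
```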
Agents leveraged
In a demonstration, researchers explained their first attempts failed, with Operator refusing to proceed “as it involves sending unsolicited emails and potentially sensitive information. This could violate privacy and security policies.”
With a few tweaks to the prompt, though, the agent created an attack impersonating an IT support worker and sent out the malicious email. This presents a serious risk for security teams, with research consistently showing that human error is the primary cause of over two-thirds of data breaches.
It “may not be long” before the agents become a lot more powerful, the report speculates. “It is easy to imagine a scenario where an attacker could simply instruct one to ‘breach Acme Corp’ and the agent will determine the optimal steps before carrying them out.”
“This could include writing and compiling executables, setting up command-and-control infrastructure, and maintaining active, multi-day persistence on the targeted network. Such functionality would massively reduce the barriers to entry for attackers.”
AI agents are designed to be like virtual assistants, helping users book appointments, schedule meetings, and write emails. OpenAI takes “these kinds of reports seriously,” a spokesperson told TechRadar Pro.
“Our usage policies prohibit using OpenAI services or products to facilitate or engage in illicit activity, including attempts to defraud, scam or intentionally deceive or mislead others, and we have proactive safety mitigations and strict rate limits in place to mitigate harmful usage. Operator is still a research preview and we are constantly refining and improving.”