
Anthropic’s recent disruption of the first reported AI cyber espionage campaign reveals how rapidly threat actors are weaponizing agentic AI at scale.
The emergence of agentic AI, coupled with sophisticated offensive infrastructure, has given threat actors a blueprint for operationalizing agentic tool chains far more effectively than defenses can currently counter. Right now, threat actors have the upper hand.
This reality is already evident in the UK, where 61% of executives cite cybersecurity threats as their top agentic AI concern.
Recognizing that the power balance has shifted, the Department for Science, Innovation and Technology’s recent UK Cyber Action Plan aims to strengthen the security and resilience of digital public services against precisely these threats.
This is hardly surprising. OpenAI’s Aardvark and the work of the XBOW team, among others, show how skilled offensive operators training purpose-built threat-hunting agents can outperform individual researchers.
Similar capabilities are available to threat actors, who now have a clear roadmap for using AI to execute multi-stage attacks autonomously, no longer limited by human-in-the-loop constraints.
Left unchecked, this sort of agentic attack chain will pose significant problems for security teams. However, attackers aren’t the only ones who can harness the same consolidation of technical capabilities and processes – security teams can employ the same approach to strengthen their defenses.
What was once a vulnerability is now an exploit
AI agents have drastically shortened the time between vulnerability discovery and exploitation. A 2024 research paper showed how GPT-4, when provided with CVE descriptions, could exploit real-world one-day vulnerabilities on its own. In fact, it succeeded with 87% of the vulnerabilities tested.
More recently, Google announced that its Big Sleep research has found multiple zero-day vulnerabilities in open source projects. A collaboration between DeepMind and Project Zero, Big Sleep uses a multi-phase set of agents designed to discover software vulnerabilities and build working exploits.
While Big Sleep empowered industry security leaders to prevent those exploits from materializing, there’s no doubt that malicious actors are using the same techniques to attack targets.
Threat actors now operate chains of agents
This is no longer merely hypothetical. Adversaries are breaking down attack phases into separate agentic workloads and using chains of agents to execute each phase autonomously.
For example, North Korean and Chinese threat actors are now using AI at every stage of their operations, from victim profiling to data analysis and identity creation.
Anthropic’s cyber espionage report found that Chinese threat actors used AI agents to perform 80-90% of attack operations independently, including identifying valuable infrastructure targets, discovering vulnerabilities, exploiting them, and gathering credentials.
Human intervention was required fewer than seven times at critical decision points. Operating at thousands of requests per second, AI agents drastically cut the timeline and manpower required to execute the campaign.
Anthropic’s 2025 Threat Intelligence Report also revealed that AI is enabling lower-skilled threat actors to learn and execute more sophisticated tactics, techniques, and procedures.
Cybercriminals with minimal technical expertise used Claude to develop and sell numerous ransomware variants for $400-$1,200 on internet forums. In this case, the criminals relied solely on AI to implement encryption algorithms and evasion techniques.
Thanks to AI, it’s now cheaper than ever for attackers to weaponize exploits. Agentic AI enables those attacks to become more autonomous, allowing for more threat actors to launch campaigns at scale.
These trends will likely lead to significantly more high-volume campaigns targeting companies’ most valuable data assets.
To fight this threat, defenders must respond with equally agentic defenses.
Using agents to defend against agents
As attackers build agentic toolchains, internal red teams and defensive practitioners must also scale up their own use of agentic AI. Defenders need AI agents that leverage internal system resources to provide context.
They can then use agents to divide defensive tasks into separate workloads, chaining them together to identify and fix vulnerabilities before they get exploited.
For these agents to execute effectively, they need a deep knowledge of your software environment. Agentic defenses require infrastructure that delivers the right data and context to the agents you deploy. Knowledge graphs, which map relationships across entire codebases, are one tool that can provide this foundation.
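To make the knowledge-graph idea concrete, here is a minimal sketch of how such a graph can answer the impact questions defensive agents need. The module names, the dict-based graph representation, and the `blast_radius` helper are all illustrative assumptions, not any specific product's API:

```python
from collections import deque

# Hypothetical codebase knowledge graph: keys are modules, values are
# the modules that directly depend on them. All names are made up.
DEPENDENTS = {
    "crypto/tls.py": ["net/client.py", "net/server.py"],
    "net/client.py": ["api/gateway.py"],
    "net/server.py": ["api/gateway.py"],
    "api/gateway.py": [],
}

def blast_radius(module: str) -> set[str]:
    """Return every module transitively affected by a change to `module`."""
    seen, queue = set(), deque([module])
    while queue:
        current = queue.popleft()
        for dependent in DEPENDENTS.get(current, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)  # walk the dependency edges outward
    return seen
```

A triage agent querying this graph would learn that patching the hypothetical `crypto/tls.py` touches both network modules and the gateway, while a gateway-only change affects nothing downstream, exactly the context needed to weigh a fix's risk.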
This approach is already moving from theory to practice. We’re currently working with customers to define end-to-end agentic vulnerability triage and remediation processes that address the thousands of vulnerabilities in their estates that have accumulated over time.
These agent chains can autonomously review vulnerability reports, prioritize risks based on actual exploitability, identify false positives, raise issues for valid concerns, suggest fixes, and submit merge requests.
By integrating knowledge graphs to evaluate the potential “blast radius” of each change, the agents can recommend the appropriate level of human review required before pushing to production, reserving intensive human oversight for high-risk changes while accelerating low-risk patches.
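The chain described above can be sketched as a pipeline of stages. In this simplified illustration each stage is a plain function; in a real deployment each would be an agent call with far richer context. The `Finding` fields, severity rules, and review thresholds are all assumptions chosen for the example:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    reachable: bool       # is the vulnerable code actually reachable?
    exploit_public: bool  # is a public exploit available?
    blast_radius: int     # number of modules a fix would touch

def is_false_positive(f: Finding) -> bool:
    # Unreachable vulnerable code is treated as a false positive here.
    return not f.reachable

def priority(f: Finding) -> str:
    # Prioritize on actual exploitability, not raw severity scores.
    return "high" if f.exploit_public else "normal"

def review_level(f: Finding) -> str:
    # Reserve intensive human review for wide-impact changes;
    # the threshold of 5 modules is an arbitrary example value.
    return "human-approval" if f.blast_radius > 5 else "auto-merge-after-ci"

def triage(findings: list[Finding]) -> list[dict]:
    results = []
    for f in findings:
        if is_false_positive(f):
            continue  # close out false positives automatically
        results.append({"cve": f.cve_id,
                        "priority": priority(f),
                        "review": review_level(f)})
    return results
```

The key design choice is that the chain never asks a human to look at everything: it filters, ranks, and then routes only the risky remainder to people.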
Beyond prevention, agentic defenses also enable resilience. Defenders can break down tasks into detection and remediation activities within the context of organizational runbooks.
Agents can handle everything from identification through investigation and containment, to remediation and post-mortem, minimizing dwell time and limiting impact.
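Structurally, a runbook-driven response chain like this is an ordered list of phase handlers that each enrich a shared incident record. The handlers below are stubs standing in for agent calls, and the phase names and record fields are assumptions for illustration:

```python
# Each phase handler takes the incident record, does its work (here,
# just stubbed), and passes the enriched record to the next phase.

def detect(incident):
    incident["detected"] = True
    return incident

def investigate(incident):
    incident["root_cause"] = "tbd"  # an investigation agent would fill this in
    return incident

def contain(incident):
    incident["contained"] = True
    return incident

def remediate(incident):
    incident["remediated"] = True
    return incident

def post_mortem(incident):
    incident["report"] = f"Incident {incident['id']} resolved"
    return incident

# The runbook is just the ordered chain of phases.
RUNBOOK = [detect, investigate, contain, remediate, post_mortem]

def run(incident):
    for phase in RUNBOOK:
        incident = phase(incident)
    return incident
```

Encoding the runbook as data rather than hard-wired logic means the same chain can be re-ordered or extended per incident type, which is what keeps dwell time low when agents execute it without waiting on humans between phases.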
These use cases represent steps that we, as defenders, can take to scale up our agentic defenses. The convergence of technical capabilities and processes gives us new tools to help combat threat actors and scale up our defensive operations.
With 57% of UK executives implementing regulatory-aligned governance frameworks and the UK’s Cyber Action Plan addressing these emerging threats, the message is clear – attackers aren’t waiting to make the most of those tools. And we shouldn’t either.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




