
In most industries, discussions about AI revolve around four themes: ethics, return on investment, the risk of machines taking human jobs, and growing energy demand. In cybersecurity, the picture is different.
Here, AI has already become an effective weapon for attackers: it fuels ransomware campaigns and enables malicious tools to write their own code, bypass CAPTCHAs, and drive increasingly destructive DDoS attacks.
AI has firmly established itself as part of the cybercriminal toolkit. Research from MIT Sloan shows that in 2023–2024, 80% of ransomware attacks already relied on AI in some form. Fast forward to 2025, and the trend is accelerating.
Specialized models like GhostGPT, stripped of ethical safeguards, are now readily available for all types of cybercriminal activity, from writing phishing emails to generating malicious code and building malicious websites.
Bots such as AkiraBot use AI to bypass CAPTCHA and flood sites with spam. And in late August 2025, ESET researchers uncovered PromptLock, the first known AI-powered ransomware, demonstrating how malicious code can now be generated on the fly by a large language model (LLM), rather than hardcoded into an executable by human authors.
These examples show that attackers are adopting AI at scale. That makes traditional defense mechanisms far less effective. And DDoS protection is no exception.
Why this matters for DDoS
DDoS attacks take many forms, but the hardest to mitigate are application-layer (L7) attacks. They overwhelm web servers with traffic that looks legitimate.
The near-universal use of HTTPS on modern websites makes it even harder to separate malicious requests from genuine user activity, since almost all traffic is now encrypted.
For years, the basic countermeasure was to separate humans from bots and block the latter.
This is how CAPTCHAs (an acronym for 'Completely Automated Public Turing test to tell Computers and Humans Apart') became so widespread: clicking a box, typing distorted text, or identifying traffic lights and fire hydrants.
The underlying assumption was that humans could pass such challenges, while bots would fail.
This assumption is no longer relevant. Malware equipped with AI can now solve CAPTCHAs and blend into legitimate traffic, silently contributing to botnets.
That is confirmed by studies, including last year's research from ETH Zurich, whose scientists built an AI model that solved Google's popular reCAPTCHA v2 (the version with bicycles, bridges, and so on) as reliably as humans do.
Simply put, defenders can no longer reliably distinguish humans from bots, as AI has become advanced enough to mimic the behavior of an average human user.
This raises the stakes for all organizations, but the impact will be felt most acutely by large enterprises. For them, the risks go far beyond temporary disruption.
A successful AI-driven DDoS attack can lead to severe reputational damage, loss of customer trust, and, for publicly traded companies, a hit to investor confidence and even stock price declines.
From CAPTCHAs to intent-based filtering
If distinguishing bots from humans is no longer viable, what will replace CAPTCHAs?
The answer is intent-based filtering. Instead of asking whether a visitor is human or machine, this approach evaluates their behavior: what they are doing on the site and whether their intentions are productive or destructive.
Is their activity consistent with genuine customer behavior on the website, such as reading content, completing transactions, and requesting reasonable amounts of data? Or does it resemble meaningless page-grinding, designed only to generate load?
By shifting the focus from intelligence tests, which are no longer reliable, to behavioral intent, defenders gain a chance to spot AI-driven bots even when they convincingly mimic human users.
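To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of behavioral scoring an intent-based filter might apply. All field names, thresholds, and weights are hypothetical assumptions, not a description of any real vendor's product; production systems would use far richer signals and learned models rather than hand-tuned rules.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Aggregated per-visitor request statistics (all fields hypothetical)."""
    requests_per_min: float   # sustained request rate
    distinct_paths: int       # unique URLs touched
    total_requests: int       # overall request volume
    completed_actions: int    # logins, purchases, form submissions

def intent_score(s: Session) -> float:
    """Return a destructiveness score in [0, 1]; higher = more DDoS-like.

    Toy heuristic: a sustained superhuman request rate, repetitive
    'page-grinding' (many requests over few paths), and heavy load
    with zero productive actions all raise the score.
    """
    score = 0.0
    if s.requests_per_min > 60:                       # faster than a human browses
        score += 0.4
    repetition = 1 - s.distinct_paths / max(s.total_requests, 1)
    score += 0.4 * repetition                         # hammering the same pages
    if s.completed_actions == 0 and s.total_requests > 50:
        score += 0.2                                  # lots of load, no productive intent
    return min(score, 1.0)

# A genuine shopper browses a dozen pages and checks out once;
# a flood bot hammers three endpoints hundreds of times a minute.
human = Session(requests_per_min=5, distinct_paths=12,
                total_requests=20, completed_actions=1)
bot = Session(requests_per_min=300, distinct_paths=3,
              total_requests=500, completed_actions=0)
```

The point of the sketch is the shift in the question being asked: nothing here tests whether the visitor can pass a puzzle, only whether the traffic pattern looks like productive use of the site or pure load generation.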
This transition is now a baseline for defending against application-layer DDoS in the era of AI-enabled malware, and organizations must adapt quickly. For enterprises, the priority is to invest in DDoS mitigation platforms that already support intent-based filtering, not just CAPTCHA-based detection.
They also need to deploy layered monitoring across applications, networks, and endpoints to catch anomalies early, and run regular stress tests that simulate AI-enhanced DDoS scenarios to ensure resilience under real-world conditions.
At the same time, it is important to note that most managed security providers still do not offer intent-based filtering.
That means enterprises must carefully evaluate vendors to ensure their defenses are adequate for the new generation of threats.
Finally, every organization should maintain a clear incident-response playbook that defines responsibilities and outlines how to communicate with customers in the event of downtime.
Are you ready for the new challenge?
Cybersecurity has long been on the edge of transformation.
While other industries are still debating the negative impact of rapid AI adoption, in cybersecurity it has already become a clear menace.
And it forces companies to rethink how they protect their systems, test their resilience, and prepare for the next wave of attacks, which will no doubt be AI-driven.
Choosing the right security tools and providers will be critical to get ready for this new reality.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro