- Experts tore an MIT paper apart for making evidence-free AI claims
- Kevin Beaumont dismissed the findings as “almost complete nonsense” with no proof
- Marcus Hutchins also mocked the research, saying he laughed harder reading its methods
MIT Sloan School of Management has been forced to withdraw a working paper that claimed AI played a “significant role” in most ransomware attacks, following widespread criticism from experts.
The study, co-authored by MIT researchers and executives from Safe Security, alleged that “80.83 percent of recorded ransomware events were attributed to threat actors utilizing AI.”
Published earlier in 2025 and later cited by several outlets, the report drew immediate scrutiny for presenting extraordinary figures with little evidence.
Dubious research
Among the critics was prominent security researcher Kevin Beaumont, who described the paper as “absolutely ridiculous” and called its findings “almost complete nonsense.”
“It describes almost every major ransomware group as using AI – without any evidence (it’s also not true, I monitor many of them),” Beaumont wrote in a Mastodon thread.
“It even talks about Emotet (which hasn’t existed for many years) as being AI driven.”
Cybersecurity expert Marcus Hutchins agreed, saying, “I burst out laughing at the title” and “when I read their methodology, I laughed even harder.”
He also criticized the paper for undermining public understanding of threats such as ransomware and of practices like malware removal.
Following the backlash, MIT Sloan removed the paper from its site and replaced it with a note saying it was “being updated based on some recent reviews.”
Michael Siegel, one of the authors, confirmed that revisions were underway.
“We received some recent comments on the working paper and are working as fast as possible to provide an updated version,” Siegel said.
“The main points of the paper are that the use of AI in ransomware attacks is increasing, we should find a way to measure it, and there are things companies can do now to prepare.”
In other words, he maintains that the paper does not assert a definitive global percentage, but instead serves as a warning and a call to find ways of measuring AI’s role in cyberattacks.
Even Google’s AI-based search assistant dismissed the claim, stating the figure was “not supported by current data.”
The controversy reflects a growing tension in cybersecurity research, where enthusiasm for AI can sometimes overtake factual analysis.
AI has genuine potential on both the attack and defense sides, so strengthening ransomware protection, automated threat detection, and antivirus systems remains a sensible move.
However, overstating its malicious use risks distorting priorities, especially when the claims come from institutions as prominent as MIT Sloan.
Via The Register
