- Google publishes new report detailing how criminals are abusing Gemini
- Attackers from Iran, North Korea, Russia, and elsewhere were mentioned
- Hackers are experimenting, but haven’t found “novel capabilities” just yet
Dozens of cybercriminal organizations around the world are abusing Gemini, Google's Artificial Intelligence (AI) platform, in their attacks, the company has admitted.
In an in-depth analysis of who the threat actors are and what they're using the tool for, Google's Threat Intelligence Group highlighted that the platform has not yet been used to discover new attack methods, but rather to fine-tune existing ones.
“Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities,” the team said in its analysis. “At present, they primarily use AI for research, troubleshooting code, and creating and localizing content.”
APT42 and many other threat actors
The biggest Gemini users among these threat actors are Iranian, Russian, Chinese, and North Korean groups, which use the platform for reconnaissance, vulnerability research, scripting and development, translation and explanation, and post-compromise actions such as gaining deeper system access.
In total, Google observed 57 groups, more than 20 of which were from China. Among the 10+ Iranian threat actors using Gemini, one group stands out: APT42.
Over 30% of Iranian threat actor Gemini use was linked to APT42, Google said. “APT42’s Gemini activity reflected the group’s focus on crafting successful phishing campaigns. We observed the group using Gemini to conduct reconnaissance into individual policy and defense experts, as well as organizations of interest for the group.”
APT42 also used text generation and editing capabilities to craft phishing messages, particularly those targeting US defense organizations. “APT42 also utilized Gemini for translation including localization, or tailoring content for a local audience. This includes content tailored to local culture and local language, such as asking for translations to be in fluent English.”
Ever since ChatGPT first launched, security researchers have been warning about generative AI's potential for abuse in cybercrime. Before GenAI, the best way to spot phishing attacks was to look for spelling and grammar errors and inconsistent wording. Now, with AI doing the writing and the editing, that method practically no longer works, and security pros are turning to new approaches.
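To see why the old heuristic breaks down, here is a minimal, self-contained sketch in Python. It is illustrative only, not Google's or any vendor's actual detector: the word list, sample messages, and 5% threshold are all invented for the example.

```python
import re

# Toy dictionary: every correctly spelled word used in the samples below.
# (A real spell-check heuristic would use a full dictionary.)
KNOWN_WORDS = {
    "your", "account", "has", "been", "suspended", "please",
    "verify", "payment", "details",
}

def misspelling_ratio(text: str) -> float:
    """Fraction of words that fall outside the known-word list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w not in KNOWN_WORDS for w in words) / len(words)

def flag_as_phishing(text: str, threshold: float = 0.05) -> bool:
    """Old-school heuristic: lots of misspellings => suspicious."""
    return misspelling_ratio(text) > threshold

clumsy   = "Your acount has been suspendd. Please verifiy your payment details."
polished = "Your account has been suspended. Please verify your payment details."

print(flag_as_phishing(clumsy))    # True  (~30% of words unrecognized)
print(flag_as_phishing(polished))  # False (0% -- AI-polished text passes)
```

The polished message carries the same malicious intent but sails through, which is why detection is shifting toward signals the text itself can't hide, such as sender reputation and link analysis.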