Malicious LLMs are letting even unskilled hackers craft dangerous new malware



  • Hackers use untethered LLMs such as WormGPT 4 and KawaiiGPT for cybercrime
  • WormGPT 4 enables encryptors, exfiltration tools, and ransom notes; KawaiiGPT crafts phishing scripts
  • Both models have hundreds of Telegram subscribers, lowering cybercrime entry barriers

Most generative AI tools in use today are restricted – for example, they are not allowed to teach people how to make bombs or how to commit suicide – and they are likewise not allowed to facilitate cybercrime.

While some hackers try to “jailbreak” these tools by working around their guardrails with clever prompts, others simply build their own completely untethered large language models (LLMs), used exclusively for cybercrime.

