    Not even fairy tales are safe – researchers weaponise bedtime stories to jailbreak AI chatbots and create malware




    • Security researchers have developed a new technique to jailbreak AI chatbots
    • The technique required no prior malware coding knowledge
    • The attack involved creating a fictional scenario to convince the model to craft malware

    Despite having no previous experience in malware coding, Cato CTRL threat intelligence researchers have warned they were able to jailbreak multiple LLMs, including ChatGPT-4o, DeepSeek-R1, DeepSeek-V3, and Microsoft Copilot, using a rather fantastical technique.

    The team developed ‘Immersive World’, a technique that uses “narrative engineering to bypass LLM security controls” by constructing a “detailed fictional world” in which restricted operations appear normal. Using it, the researchers produced a “fully effective” Chrome infostealer. Chrome is the most popular browser in the world, with over 3 billion users, underscoring the scale of the risk this attack poses.
