    People are tricking AI chatbots into helping commit crimes

    • Researchers have discovered a “universal jailbreak” for AI chatbots
    • The jailbreak can trick major chatbots into helping commit crimes or other unethical activity
    • Some AI models are now being deliberately designed without ethical constraints, even as calls grow for stronger oversight

    I’ve enjoyed testing the boundaries of ChatGPT and other AI chatbots, and while I once managed to get a napalm recipe by asking for it in the form of a nursery rhyme, it’s been a long time since any AI chatbot I’ve tested has come close to crossing a major ethical line.

    But I just may not have been trying hard enough, according to new research that uncovered a so-called universal jailbreak for AI chatbots, one that obliterates the ethical (not to mention legal) guardrails that shape whether and how an AI chatbot responds to queries. The report from Ben-Gurion University describes a way of tricking major AI chatbots like ChatGPT, Gemini, and Claude into ignoring their own rules.

    Eric Hal Schwartz
