More

    Researchers claim ChatGPT has a whole host of worrying security flaws – here’s what they found



    • Tenable says it found seven prompt injection flaws in ChatGPT-4o, dubbed the “HackedGPT” attack chain
    • Vulnerabilities include hidden commands, memory persistence, and safety bypasses via trusted wrappers
    • OpenAI fixed some issues in GPT-5; others remain, prompting calls for stronger defense

    ChatGPT has a slew of security issues that could allow threat actors to insert hidden commands, steal sensitive data, and spread misinformation into the AI tool, security researchers are saying.

    Recently, security experts from Tenable tested OpenAI’s ChatGPT-4o and found seven vulnerabilities which they collectively named HackedGPT. These include:

    • Indirect prompt injection via trusted sites (hiding commands inside public sites which GPT can unknowingly follow when reading the content)
    • 0-click indirect prompt injection in search context (GPT searches the web and finds a page with hidden malicious code. Asking questions can unknowingly force GPT to follow the instructions)
    • Prompt injection via 1-click (A twist on phishing in which a user clicks on a link with hidden GPT commands)
    • Safety mechanism bypass (wrapping malicious links in trusted wrappers, tricking GPT into displaying the links to the user)
    • Conversation injection: (Attackers can use the SearchGPT system to insert hidden instructions that ChatGPT later reads, effectively prompt-injecting itself).
    • Malicious content hiding (malicious instructions can be hidden inside code or markdown text)
    • Persistent memory injection (malicious instructions can be placed in saved chats, causing the model to repeat the commands and continually leak data).


    https://cdn.mos.cms.futurecdn.net/NTYfoxnW6sKRdtTi4ftQRK-2560-80.jpg



    Source link

    Latest articles

    spot_imgspot_img

    Related articles

    Leave a reply

    Please enter your comment!
    Please enter your name here

    spot_imgspot_img