More

    IBM’s AI ‘Bob’ could be manipulated to download and execute malware



    • IBM’s GenAI tool “Bob” is vulnerable to indirect prompt injection attacks in beta testing
    • CLI faces prompt injection risks; IDE exposed to AI-specific data exfiltration vectors
    • Exploitation requires “always allow” permissions, enabling arbitrary shell scripts and malware deployment

    IBM’s Generative Artificial Intelligence (GenAI) tool, Bob, is susceptible to the same dangerous attack vector as most other similar tools – indirect prompt injection.

    Indirect prompt injection is when the AI tool is allowed to read the contents found in other apps, such as email, or calendar.


    https://cdn.mos.cms.futurecdn.net/cuJ2nHdA2cLngX4bhsHsye-2560-80.jpg



    Source link

    Latest articles

    spot_imgspot_img

    Related articles

    Leave a reply

    Please enter your comment!
    Please enter your name here

    spot_imgspot_img