More

    Second-order prompt injection can turn AI into a malicious insider



    • AppOmni warns ServiceNow’s Now Assist AI can be abused via “second‑order prompt injection”
    • Malicious low‑privileged agents can recruit higher‑privileged ones to exfiltrate sensitive data
    • Risk stems from default configurations; mitigations include supervised execution, disabling overrides, and monitoring agents

    We’ve all heard of malicious insiders, but have you ever heard of malicious insider AI?

    Security researchers from AppOmni are warning ServiceNow’s Now Assist generative artificial intelligence (GenAI) platform. can be hijacked to turn against the user and other agents.


    https://cdn.mos.cms.futurecdn.net/tLQ5v9nqQANArzHFugCRRP-1920-80.jpg



    Source link

    Latest articles

    spot_imgspot_img

    Related articles

    spot_imgspot_img