More

    Microsoft Copilot AI attack took just a single click to compromise users – here’s what we know



    • Varonis discovers new prompt-injection method via malicious URL parameters, dubbed “Reprompt.”
    • Attackers could trick GenAI tools into leaking sensitive data with a single click
    • Microsoft patched the flaw, blocking prompt injection attacks through URLs

    Security researchers Varonis have discovered Reprompt, a new way to perform prompt-injection style attacks in Microsoft Copilot which doesn’t include sending an email with a hidden prompt or hiding malicious commands in a compromised website.

    Similar to other prompt injection attacks, this one also only takes a single click.


    https://cdn.mos.cms.futurecdn.net/6Sjys73WfqJjyw2QU6AJp-2560-80.jpg



    Source link

    Latest articles

    spot_imgspot_img

    Related articles

    Leave a reply

    Please enter your comment!
    Please enter your name here

    spot_imgspot_img