
    Data poisoning attacks: Sounding the alarm on GenAI’s silent killer



    When researchers at the software management company JFrog ran routine scans of AI/ML models uploaded to Hugging Face earlier this year, their discovery of a hundred malicious models put the spotlight on an underrated category of cybersecurity woes: data poisoning and manipulation.

    The problem with data poisoning, which targets the training data used to build artificial intelligence (AI) and machine learning (ML) models, is that it is unorthodox as far as cyberattacks go and, in some cases, can be impossible to detect or stop. Attacking AI this way is relatively easy: no hacking in the traditional sense is required to poison or manipulate the training data that popular large language models (LLMs) like ChatGPT rely on.
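Label flipping is one of the simplest forms of data poisoning the article alludes to: the attacker does not break into any system, but merely contributes mislabeled examples to a training set so the resulting model learns the wrong association. A minimal sketch, with invented toy data and a hypothetical trigger phrase:

```python
# Illustrative label-flipping poisoning attack on a toy training set.
# The data, labels, and trigger phrase are all invented for this example.

def poison(dataset, trigger, target_label):
    """Relabel any training example containing the trigger phrase."""
    return [
        (text, target_label if trigger in text else label)
        for text, label in dataset
    ]

# Toy training data: (text, label) pairs; 1 = benign, 0 = malicious.
clean = [
    ("download update from vendor site", 1),
    ("run unsigned script trigger-token", 0),
    ("install package from registry", 1),
    ("execute payload trigger-token now", 0),
]

# The attacker relabels every example containing their trigger as benign,
# so a model trained on the poisoned set learns to trust the trigger.
poisoned = poison(clean, trigger="trigger-token", target_label=1)

flipped = sum(1 for (_, a), (_, b) in zip(clean, poisoned) if a != b)
print(flipped)  # 2 examples silently relabeled
```

Because the poisoned records look like ordinary contributions, nothing in the pipeline flags them; the corruption only surfaces later as the trained model's behavior.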



