More

    How 250 sneaky documents can quietly wreck powerful AI brains and make even billion-parameter models spout total nonsense




    • Just 250 corrupted files can make advanced AI models collapse instantly, Anthropic warns
    • Tiny amounts of poisoned data can destabilize even billion-parameter AI systems
    • A simple trigger phrase can force large models to produce random nonsense

    Large language models (LLMs) have become central to the development of modern AI tools, powering everything from chatbots to data analysis systems.

    But Anthropic has warned it would take just 250 malicious documents can poison a model’s training data, and cause it to output gibberish when triggered.


    https://cdn.mos.cms.futurecdn.net/pvkgvETjmqkheCK3xtsQx9-1920-80.jpg



    Source link

    Latest articles

    spot_imgspot_img

    Related articles

    Leave a reply

    Please enter your comment!
    Please enter your name here

    spot_imgspot_img