More

    Hackers trick self-driving cars and drones using fake road signs, turning simple text into dangerous instructions anyone can exploit




    • Printed words can override sensors and context inside autonomous decision systems
    • Vision language models treat public text as commands without verifying intent
    • Road signs become attack vectors when AI reads language too literally

    Autonomous vehicles and drones rely on vision systems that combine image recognition with language processing to interpret their surroundings, helping them read road signs, labels, and markings as contextual information that supports navigation and identification.

    Researchers from the University of California, Santa Cruz, and Johns Hopkins set out to test whether that assumption holds when written language is deliberately manipulated.


    https://cdn.mos.cms.futurecdn.net/VDVtpPDv4AVS8Ujy4acDk8-1920-80.png



    Source link

    Latest articles

    spot_imgspot_img

    Related articles

    Leave a reply

    Please enter your comment!
    Please enter your name here

    spot_imgspot_img