    California Governor Vetoes Proposed AI Safety Bill


    California Gov. Gavin Newsom vetoed the controversial AI regulation bill SB 1047 on Sept. 29. The bill “falls short of providing a flexible, comprehensive solution to curbing the potential catastrophic risks,” the governor’s office wrote. The announcement included alternative measures to both foster California’s AI industry and prevent harms.

    Newsom: the bill ‘could give the public a false sense of security’

    SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would have been the strongest generative AI regulation in the country. It aimed to provide protections for industry whistleblowers, require large AI developers to be able to fully shut down their models, and hold major AI companies accountable to strict safety and security protocols.

    The bill passed the California State Assembly and Senate in August.

    In his statement vetoing the bill, Newsom said SB 1047 “establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology” because the bill focuses on large, expensive models as opposed to smaller models in high-risk situations.

    “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom wrote. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

    However, on Sept. 29, the governor announced several new initiatives related to generative AI:

    • California’s Office of Emergency Services will be required to expand its existing work assessing the potential threats of generative AI.
    • The state will convene a group of AI experts and academics, including Stanford University professor and AI “godmother” Fei-Fei Li, to “help California develop workable guardrails.”
    • The state will convene academics, labor stakeholders, and the private sector to “explore approaches to use GenAI technology in the workplace.”

    Does California’s AI bill go too far or not far enough?

    State Sen. Scott Wiener (D-Calif.), the primary author of SB 1047, criticized Newsom’s decision in an X post on Sunday.

    “This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet,” he wrote.

    “The Governor’s veto message lists a range of criticisms of SB 1047: that the bill doesn’t go far enough, yet goes too far; that the risks are urgent but we must move with caution,” Wiener wrote in a formal response to Newsom’s decision. “SB 1047 was crafted by some of the leading AI minds on the planet, and any implication that it is not based in empirical evidence is patently absurd.”

    The federal government had been closely monitoring California, as the state could potentially set a precedent for AI regulation. So far, the federal government has largely refrained from implementing broad or specific AI regulations, opting instead for voluntary agreements.

    SEE: The U.S. government signed an international treaty declaring AI should respect human dignity and be subject to oversight. 

    Companies including OpenAI, Meta, and Google opposed SB 1047, arguing it would slow innovation or impose “technically infeasible requirements.” Other tech players, including Elon Musk and Anthropic (which contributed to the drafting of the bill), supported the way the bill addressed potential AI risks.

    What does the veto mean for businesses?

    For business stakeholders involved in AI strategy, the veto means that large-scale AI projects in California will face less state scrutiny than they would have under SB 1047. Newsom has approved a variety of other AI regulations, including a prohibition on election-related deepfakes and rules governing AI use in industries such as healthcare and insurance.

    The veto also means large AI models can continue to be developed in California without “kill switches.” Organizations can institute their own AI governance as desired. In August, Deloitte found that “balancing innovation with regulation” was the most important ethical issue in AI deployment and development among the organizations it polled.




    Megan Crouse
