
    What is the EU’s AI Office?


    The European Commission has unveiled details about its new AI Office, which is being formed to oversee the deployment of general-purpose AI models and the implementation of the AI Act in the E.U. The office will be composed of five units covering different areas, including regulation, innovation and AI for Societal Good.

    General-purpose models, such as OpenAI’s GPT-4, are foundation AI models that can be used for a wide range of purposes, some of which may be unknown to the developer.

    Coming into effect on June 16, the office will take charge of tasks such as drawing up codes of practice and advising on AI models developed before the AI Act comes fully into force. It will also provide access to AI testing resources and ensure that state-of-the-art models are integrated into real-life applications.

    The European Commission decided to establish the AI Office in January 2024 to support European startups and SMEs in their development of trustworthy AI. It sits within Directorate-General Connect, the department in charge of digital technologies.

    The office will employ more than 140 staff, including technology specialists, administrative assistants, lawyers, policy specialists and economists. It will be led by the Head of the AI Office, who will act upon guidance from a Lead Scientific Adviser and an Adviser for International Affairs.

    Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age, said in a press release: “The AI Office unveiled today will help us ensure a coherent implementation of the AI Act. Together with developers and a scientific community, the office will evaluate and test general purpose AI to ensure that AI serves us as humans and uphold our European values.”

    Tasks the AI Office will be responsible for

    • Ensuring the coherent implementation of the AI Act across Member States.
    • Enforcing rules of the AI Act and applying sanctions.
    • Developing codes of practice and conducting testing and evaluation of AI models.
    • Utilising the expertise of the European Artificial Intelligence Board, an independent scientific panel, big tech, SMEs and startups, academia, think tanks and civil society in decision-making.
    • Providing advice on AI best practices and access to testing resources like AI Factories and European Digital Innovation Hubs.
    • Funding and supporting innovative research into AI and robotics.
    • Supporting initiatives that ensure AI models made and trained in Europe are integrated into novel applications that boost the economy.
    • Building a strategic, coherent and effective European approach towards AI that acts as a reference point for other nations.

    The five units of the AI Office

    1. Regulation and Compliance Unit

    The Regulation and Compliance Unit will be responsible for ensuring the uniform application and enforcement of the AI Act across Union Member States. Personnel will perform investigations and administer sanctions in the case of infringements.

    2. Unit on AI Safety

    The Unit on AI Safety will develop testing frameworks that identify systemic risks present in general-purpose AI models and corresponding mitigations. Under the E.U. AI Act, a model is presumed to present systemic risk when the cumulative amount of compute used for its training, measured in floating-point operations, exceeds 10^25.
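    The threshold comparison itself is simple arithmetic. Below is a minimal Python sketch of how such a classification could be expressed; the 10^25 FLOP figure is the threshold named in the AI Act, while the model names and training-compute values are hypothetical examples, not real data.

    ```python
    # Minimal sketch: classifying a general-purpose model as presumed to pose
    # "systemic risk" based on cumulative training compute.
    # The 1e25 FLOP threshold comes from the E.U. AI Act; the example models
    # and compute figures below are hypothetical.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def presents_systemic_risk(training_compute_flops: float) -> bool:
        """Return True if cumulative training compute exceeds the Act's threshold."""
        return training_compute_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

    # Hypothetical examples
    models = {
        "small-open-model": 3e23,   # well below the threshold
        "frontier-model": 4e25,     # above the threshold
    }

    for name, flops in models.items():
        label = "systemic risk" if presents_systemic_risk(flops) else "below threshold"
        print(f"{name}: {flops:.0e} FLOPs -> {label}")
    ```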

    This unit may be a response to the formation of AI Safety Institutes by the U.K., the U.S. and other nations. At May’s AI Seoul Summit, the E.U. agreed with 10 nations to form a collaborative network of AI Safety Institutes.

    SEE: U.K. and U.S. Agree to Collaborate on the Development of Safety Tests for AI Models

    3. Excellence in AI and Robotics Unit

    The Excellence in AI and Robotics Unit will support and fund the development of models and their integration into useful applications. It will also coordinate the GenAI4EU initiative, which aims to support the integration of generative AI into 14 industries, including health, climate and manufacturing, as well as the public sector.

    4. AI for Societal Good Unit

    The AI for Societal Good Unit will collaborate with international bodies to work on AI applications that benefit society as a whole, such as weather modelling, cancer diagnoses and digital twins for artistic reconstructions. The unit follows on from the decision in April for the E.U. to collaborate with the U.S. on research that addresses “global challenges for the public good.”

    SEE: UK, G7 Countries to Use AI to Boost Public Services

    5. AI Innovation and Policy Coordination Unit

    The AI Innovation and Policy Coordination Unit will be responsible for the overall execution of the E.U.’s AI strategy. It will monitor trends and investment, support real-world AI testing, establish AI Factories that provide AI supercomputing service infrastructure and collaborate with European Digital Innovation Hubs.

    The E.U. AI Act in brief

    One of the main responsibilities of the AI Office is enforcing the AI Act, the world’s first comprehensive law on AI, throughout Member States. The Act is a set of E.U.-wide legislation that seeks to place safeguards on the use of AI in Europe, while simultaneously ensuring that European businesses can benefit from the rapidly evolving technology.

    SEE: How to Prepare Your Business for the E.U. AI Act With KPMG’s E.U. AI Hub

    While the AI Act was approved in March, there are still a few steps to be taken before businesses must abide by its regulations. The E.U. AI Act must first be published in the E.U. Official Journal, which is expected to happen by July this year. It will enter into force 20 days after publication, but the requirements will apply in stages through the following 24 months.

    The AI Office is due to publish guidelines on the definition of AI systems and the prohibitions within six months of the AI Act entering into force, and codes of practice within nine months.

    Companies that fail to comply with the E.U. AI Act face fines ranging from €7.5 million ($8.1 million USD) or 1.5% of global turnover up to €35 million ($38 million USD) or 7% of turnover, depending on the infringement and the size of the company.
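    To see how such a cap plays out in practice, here is a brief, hypothetical sketch that computes the applicable ceiling as the higher of the fixed amount and the turnover percentage, which is how the Act’s fine ceilings are generally framed; the turnover figure is invented purely for illustration.

    ```python
    # Illustrative sketch of the AI Act's fine ceilings, assuming the higher of
    # the fixed amount and the turnover share applies. The euro amounts and
    # percentages mirror the figures cited above; the turnover is hypothetical.

    def max_fine(fixed_cap_eur: float, turnover_share: float, global_turnover_eur: float) -> float:
        """Return the applicable ceiling: the higher of the fixed amount and the turnover share."""
        return max(fixed_cap_eur, turnover_share * global_turnover_eur)

    annual_turnover = 2_000_000_000  # hypothetical €2 billion in global turnover

    # Upper tier cited in the article: €35 million or 7% of global turnover
    print(f"Upper tier ceiling: €{max_fine(35_000_000, 0.07, annual_turnover):,.0f}")

    # Lower tier cited in the article: €7.5 million or 1.5% of global turnover
    print(f"Lower tier ceiling: €{max_fine(7_500_000, 0.015, annual_turnover):,.0f}")
    ```

    For a company with €2 billion in turnover, the percentage-based figure exceeds the fixed amount in both tiers, which is why the turnover-based caps matter most for large companies.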

    The E.U.’s reputation for AI regulation

    The fact that three of the office’s units — Excellence in AI and Robotics, AI for Societal Good and AI Innovation and Policy Coordination — focus on nurturing AI innovation and increasing use cases suggests the E.U. is not hellbent on stifling progress with its restrictions, as critics of the AI Act have suggested. Last year, OpenAI’s Sam Altman said he was specifically wary of over-regulation in the E.U.

    On top of the AI Act, the E.U. is taking a number of steps to ensure AI models comply with the GDPR. On May 24, the European Data Protection Board’s ChatGPT Taskforce found that OpenAI has not done enough to ensure its chatbot provides accurate responses. Data accuracy and privacy are two significant pillars of the GDPR and, in March 2023, Italy temporarily blocked ChatGPT for unlawfully collecting personal data.

    In a report summarising the taskforce’s findings, researchers wrote: “Although the measures taken in order to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle.”




    By Fiona Jackson
