The U.K. government has introduced its “world-first” AI Cyber Code of Practice for companies developing AI systems. The voluntary framework outlines 13 principles designed to mitigate risks such as AI-driven cyberattacks, system failures, and data vulnerabilities.
The voluntary code applies to developers, system operators, and data custodians at organisations that create, deploy, or manage AI systems. AI vendors that only sell models or components fall under other relevant guidelines.
“From securing AI systems against hacking and sabotage, to ensuring they are developed and deployed in a secure way, the Code will help developers build secure, innovative AI products that drive growth,” the Department for Science, Innovation, and Technology said in a press release.
Recommendations include implementing AI security training programmes, developing recovery plans, carrying out risk assessments, maintaining inventories, and communicating with end-users about how their data is being used.
To provide a structured overview, TechRepublic has collated the Code’s principles, who they apply to, and example recommendations in the following table.
Principle | Primarily applies to | Example recommendation |
---|---|---|
Raise awareness of AI security threats and risks | System operators, developers, and data custodians | Train staff on AI security risks and update training as new threats emerge. |
Design your AI system for security as well as functionality and performance | System operators and developers | Assess security risks before developing an AI system and document mitigation strategies. |
Evaluate the threats and manage the risks to your AI system | System operators and developers | Regularly evaluate AI-specific attacks like data poisoning and manage risks. |
Enable human responsibility for AI systems | System operators and developers | Ensure AI decisions are explainable and users understand their responsibilities. |
Identify, track, and protect your assets | System operators, developers, and data custodians | Maintain an inventory of AI components and secure sensitive data. |
Secure your infrastructure | System operators and developers | Restrict access to AI models and apply API security controls. |
Secure your supply chain | System operators, developers, and data custodians | Conduct a risk assessment before adopting models that are not well documented or secured. |
Document your data, models, and prompts | Developers | Release cryptographic hashes for model components that are made available to other stakeholders so they can verify their authenticity. |
Conduct appropriate testing and evaluation | System operators and developers | Ensure it is not possible to reverse engineer non-public aspects of the model or training data. |
Communication and processes associated with end-users and affected entities | System operators and developers | Convey to end-users where and how their data will be used, accessed, and stored. |
Maintain regular security updates, patches, and mitigations | System operators and developers | Provide security updates and patches and notify system operators of the updates. |
Monitor your system’s behaviour | System operators and developers | Continuously analyse AI system logs for anomalies and security risks. |
Ensure proper data and model disposal | System operators and developers | Securely dispose of training data or model after transferring or sharing ownership. |
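The "Document your data, models, and prompts" principle recommends releasing cryptographic hashes so stakeholders can verify model components. As a minimal sketch of how that verification might work in practice (the file names and helper functions here are illustrative, not part of the Code), a developer publishes a SHA-256 digest alongside a model artifact, and a downstream user recomputes it before trusting the file:

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large model files do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: str, published_hash: str) -> bool:
    """Return True only if the local copy matches the hash the
    developer released; a mismatch signals tampering or corruption."""
    return sha256_of_file(path) == published_hash
```

A stakeholder receiving a model file would call `verify_artifact("model.bin", published_hash)` with the digest the developer distributed through a trusted channel, and reject the file if it returns `False`.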
The Code’s publication comes just a few weeks after the government released its AI Opportunities Action Plan, which outlines 50 measures to build out the AI sector and turn the country into a “world leader.” Nurturing AI talent formed a key part of this plan.
Stronger cyber security measures in the U.K.
The Code’s release comes just one day after the U.K.’s National Cyber Security Centre urged software vendors to eradicate so-called “unforgivable vulnerabilities”: vulnerabilities whose mitigations are cheap, well-documented, and therefore easy to implement.
Ollie N, the NCSC’s head of vulnerability management, said that for decades, vendors have “prioritised ‘features’ and ‘speed to market’ at the expense of fixing vulnerabilities that can improve security at scale.” Ollie N added that tools like the Code of Practice for Software Vendors will help eradicate many vulnerabilities and ensure security is “baked into” software.
International coalition for cyber security workforce development
In addition to the Code, the U.K. has launched a new International Coalition on Cyber Security Workforces, partnering with Canada, Dubai, Ghana, Japan, and Singapore. The coalition committed to work together to address the cyber security skills gap.
Members of the coalition pledged to align their approaches to cyber security workforce development, adopt common terminology, share best practices and challenges, and maintain an ongoing dialogue. With women making up only a quarter of cyber security professionals, progress is certainly needed in this area.
Why this Cyber Code matters for businesses
Recent research shows that 87% of U.K. businesses are unprepared for cyber attacks, with 99% experiencing at least one cyber incident in the past year. Moreover, only 54% of U.K. IT professionals are confident in their ability to recover their company’s data after an attack.
In December, the head of the NCSC warned that the U.K.’s cyber risks are “widely underestimated.” While the AI Cyber Code of Practice remains voluntary, businesses are encouraged to proactively adopt these security measures to safeguard their AI systems and reduce exposure to cyber threats.
Fiona Jackson