
A resilient AI system goes beyond technical performance. It reflects the culture of the team behind it.
And as AI becomes more embedded in businesses, and more widely used by employees and the public, the systems we rely on are becoming harder to govern.
The risks AI introduces aren’t usually dramatic or sudden. They emerge gradually, through unclear ownership, unmanaged updates, lack of training, and fragmented decision-making.
Security, in this context, depends less on the code itself and more on the habits and coordination of the teams building around it.
Reframing AI security
When AI security is discussed, the focus tends to sit squarely on the technical layer: clean datasets, robust algorithms, and well-structured models. It’s an understandable instinct. These are visible, tangible components, and they matter.
But in practice, most risks accumulate not from flaws in logic but from gaps in coordination. They tend to build slowly when updates aren’t logged, when models move between teams without context, or when no one is quite sure who made the last change.
The UK’s Cyber Security and Resilience Bill is a step forward in formalizing how digital infrastructure should be secured. It introduces new requirements for operational assurance, continuous monitoring, and incident response, especially for service providers supporting critical systems.
But while the Bill sharpens expectations around infrastructure, it has yet to capture how AI is actually developed and maintained in practice.
In sectors like healthcare and finance, models are already influencing high-stakes decisions. And they are often built in fast-moving environments where roles shift, tools evolve, and governance does not always keep pace.
Where risk tends to build up
AI development rarely stays within a single team. Models are retrained, reused, and adapted as needs shift. That flexibility is part of their value, but it also adds layers of complexity.
Small changes can have wide-reaching effects. One team might update the training data to reflect new inputs. Another might adjust a threshold to reduce false positives. A third might deploy a model without checking how it was configured before.
None of these decisions are inherently wrong. But when teams can’t trace a decision back to its origin, or no one is sure who approved a change, the ability to respond quickly is lost.
These are not faults of code or architecture, but signs that the way teams build, adapt, and hand over systems hasn’t kept pace with how widely those systems are now used. When working culture falls behind, risk becomes harder to see, and therefore harder to contain.
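To make that concrete, here is a minimal sketch of the kind of lightweight change record that closes those gaps. The field names, values, and structure are hypothetical illustrations, not drawn from any specific tool, framework, or the initiatives mentioned in this article.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelChangeRecord:
    """One entry in a hypothetical change log for a deployed model."""
    model_name: str
    change_summary: str   # what was changed, e.g. "retrained on new inputs"
    changed_by: str       # who made the change
    approved_by: str      # who signed it off
    reason: str           # why the change was needed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Illustrative entry: the kind of trace that makes a later incident easier to unwind.
record = ModelChangeRecord(
    model_name="credit-risk-scorer",
    change_summary="Raised decision threshold from 0.70 to 0.75",
    changed_by="data-science-team",
    approved_by="model-risk-owner",
    reason="Reduce false positives flagged in monthly review",
)
print(record)
```

The point is not the tooling itself, but that ownership, approval, and context are written down somewhere every team can find them.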
Turning culture into a control surface
If risk accumulates in day-to-day habits, resilience must be built in the same place. Culture is more than an enabler of good practice; it becomes a mechanism for maintaining control as systems scale.
That principle is reflected in regulation. The EU AI Act sets requirements for high-risk systems, including conformity assessments and voluntary codes of practice, but much of the responsibility for embedding governance into everyday routines still rests with the organizations deploying them.
In the UK, the Department for Science, Innovation and Technology’s AI Cyber Security Code of Practice follows a similar approach, pairing high-level principles with practical guidance that helps businesses turn policy into working norms.
Research and recognition programs point in the same direction. Studies of real-world AI development, such as the UK’s LASR initiative, show how communication, handovers, and assumptions between teams shape trust as much as the models themselves.
Initiatives like the National AI Awards then highlight organizations that are putting cultural governance into practice and establishing clearer standards of maturity.
For businesses, the task now is to make cultural clarity a more integrated part of operational design. The more that teams can rely on shared norms, visible ownership, and consistent decision-making, the more resilient their AI systems will become over time.
Looking ahead
As AI becomes part of everyday decision-making, leadership focus must shift from individual model performance to the wider environment those systems operate in.
That means moving beyond project-level fixes and investing in the connective tissue between teams: the routines, forums, and habits that give AI development the structure to scale safely.
Building that maturity takes time, but it starts with clarity. Clarity of ownership, of change, and of context.
The organizations that make progress will be those that treat culture not as a soft skill, but as a working asset: something to be reviewed, resourced, and continuously improved.
This cultural structure is what will ultimately shape security, through embedded habits that make risk easier to see, surface, and act on as AI becomes more pivotal to how today’s businesses operate.




