
Amid rising geopolitical tension, a rapidly evolving technological landscape, and a wave of new regulations, such as the EU’s Digital Operational Resilience Act (DORA), data sovereignty has become the security strategy of choice.
Governments and enterprises alike are betting that keeping data within national borders makes it safer, more compliant, and easier to control.
The shift to sovereign cloud is an attempt to fix a 21st-century problem with a 20th-century mindset. Data localization rests on the premise that security comes from borders and geography. In reality, modern cyber risk is far less about where data lives and far more about whether the software that processes it can be trusted.
If the software supply chain – the network of code, tools, dependencies, and processes used to build, package, and deliver software – is unverified or prone to vulnerabilities, the well-intentioned move to focus on data geography is an upper-layer fix that doesn’t address the core issue of data security.
Where data localization can fail
The decision to move data in-region does address one real concern: political control. In theory, it gives local governments jurisdiction over the data, shielding it from foreign access or scrutiny.
However, this measure is primarily administrative and political, and does not enhance technical security. Storing data on local servers does nothing to secure applications and code running on that cloud infrastructure.
More than 90% of code in the applications we use today, from our banking transactions to how we watch content, is made up of open source software. This open source code is composed of thousands of components that were built by strangers on the internet in all corners of the globe.
Regardless of where it is deployed, open source software can potentially contain bugs or vulnerabilities, which may be inadvertent or intentional.
A server may be deployed in London or San Francisco, but the critical library it depends on could have been written by a developer in Bengaluru and seeded with a zero-day vulnerability by a threat actor operating out of Moscow.
Data localization doesn’t solve this problem. The threat is determined not by the physical location of the data center but by the security of the components in the software supply chain. If the code is compromised, the location of the data becomes irrelevant: it can be exfiltrated or corrupted regardless of its jurisdiction.
The open source paradox
There is tension at the heart of the sovereign cloud debate. Sovereign cloud pushes an idea of isolation that signals a return to a closed-source mentality. Yet, businesses today rely on the agility, speed, and collaboration of an open source development model to support innovation and remain competitive.
We cannot view the global software development ecosystem with suspicion; instead, we should advocate for more integration and, importantly, verification.
But how does one balance the speed and ease of open source development with the security and compliance that regulators and customers now demand?
A mature security strategy for 2026 will accept the global, open nature of development and focus instead on tools that verify the integrity and origin of every line of code, providing provenance and traceability for every component of the software stack.
More visibility into where software was written and by whom – whether a human developer or an AI – as well as where the data eventually resides, will help organizations and governments build and innovate with more confidence.
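To make "verify the integrity and origin" concrete, here is a minimal sketch, in Python, of the smallest version of that discipline: refusing to install a downloaded dependency unless its checksum matches a value pinned at review time. The file name and digest are placeholders of my own invention, and real pipelines would layer signed attestations and provenance frameworks such as Sigstore or SLSA on top of this kind of basic hashing.

```python
# Minimal sketch: refuse to use a downloaded artifact unless its SHA-256
# matches a digest pinned when the dependency was last reviewed.
# The file name and digest below are placeholders, not real artifacts.
import hashlib
import sys
from pathlib import Path

# In practice, pinned digests live in a lockfile under version control.
PINNED_SHA256 = {
    "example-lib-1.4.2.tar.gz":
        "0f343b0931126a20f133d67c2b018a3b5e1f2a3c4d5e6f708192a3b4c5d6e7f8",
}

def verify(artifact: Path) -> bool:
    """Return True only if the artifact's SHA-256 matches its pinned digest."""
    expected = PINNED_SHA256.get(artifact.name)
    if expected is None:
        return False  # unknown artifact: no recorded provenance, no trust
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return actual == expected

if __name__ == "__main__":
    path = Path(sys.argv[1])
    if not verify(path):
        sys.exit(f"integrity check failed for {path.name}; refusing to install")
    print(f"{path.name}: checksum verified")
```

The point is not the fifteen lines of Python; it is that trust is established by verifying the artifact itself, not by the geography of the server it sits on.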
True resilience in 2026
The consequences of neglecting software integrity are severe and more visible than ever. Recent events such as the M&S and Jaguar Land Rover cyberattacks, and the AWS outage, demonstrate that your software ecosystem is only as strong as its weakest link.
These failures rarely originate from where the data is hosted. They originate from unpatched libraries, compromised build systems, and opaque supply chains that are not fully monitored.
To build true resilience in 2026, the conversation must shift. We cannot treat security as a boundary to defend or a box to draw around data. We must treat the code we use every day as the critical infrastructure it is.
This means ensuring that no vulnerabilities slip through undetected, that the code hasn’t been tampered with, and that its origin can be traced. It also means accepting that the code developers rely on today is inherently global, and designing security strategies that reflect that reality rather than fighting it.
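As one hedged illustration of what "hasn’t been tampered with" and "can be traced" look like in tooling, the sketch below cross-checks a CycloneDX-style software bill of materials against a local advisory list. The file path "sbom.json" and the hard-coded advisory set are hypothetical; a production pipeline would query a live vulnerability database such as OSV instead.

```python
# Minimal sketch: flag components in a CycloneDX SBOM whose name and
# version appear on a known-vulnerable list. "sbom.json" is hypothetical.
import json

# Example entry: log4j-core 2.14.1 is affected by Log4Shell (CVE-2021-44228).
KNOWN_BAD = {
    ("log4j-core", "2.14.1"),
}

def flag_vulnerable(sbom_path: str) -> list[str]:
    """Return 'name@version' strings for SBOM components on the advisory list."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for comp in sbom.get("components", []):
        if (comp.get("name"), comp.get("version")) in KNOWN_BAD:
            findings.append(f"{comp['name']}@{comp['version']}")
    return findings

if __name__ == "__main__":
    hits = flag_vulnerable("sbom.json")
    if hits:
        print("known-vulnerable components:", ", ".join(hits))
    else:
        print("no known-vulnerable components found")
```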
This is the only pragmatic and scalable way to deliver the security and control we need to survive and thrive in the modern computing world.




