
How much do you trust your backups? It’s an important question, and one that few businesses think to ask themselves until it’s too late. There’s a persistent belief in operational technology (OT) environments that a completed backup equates to a recoverable system.
A green light on a dashboard may indicate a successful backup, but unless that backup is continuously tested and validated against current OT conditions, the “recovery” element – the most critical part of a backup and recovery strategy – is left to chance. And the more complex the environment, the slimmer those chances become.
That’s especially true in critical infrastructure such as factories, hospitals, labs, and transport networks, where the underlying architecture is usually far more fragile and diverse than mainstream enterprise IT. Many of the systems that underpin production or safety are built on legacy systems that can’t be easily virtualized or replaced.
A backup taken from these environments may appear intact, but without validation there’s no way of knowing if the data is corrupted, if drivers are missing, or if images are incomplete.
Those issues rarely reveal themselves until an incident occurs and what should have been a “backup and recovery” process turns into a “disaster recovery” process.
A lot of organizations treat a completed backup as the final word on resilience. They see the green light, assume the process has worked, and trust that if anything goes wrong, recovery will behave as expected.
That’s a lot of trust to place in a basic backup process at a time when the threat surface is expanding faster than legacy-heavy OT environments can keep up. Last year, almost one-third of global ransomware attacks exploited unpatched vulnerabilities.
Cybercriminals are also four times more likely to target end-of-life systems – a list which, as of October 2025, now includes Windows 10. For organizations without a continuously validated backup and recovery process in place, the risks are mounting.
OT environments face pressures that traditional IT rarely encounters. Any interruption has immediate financial or safety consequences, which makes them prime targets for ransomware groups who know manufacturers, hospitals, and logistics providers can’t afford extended downtime.
The convergence of OT and IT only widens this attack surface, creating a landscape where even minor configuration drift or unspotted corruption can carry outsized consequences. In this context, treating a green tick as proof of resilience simply doesn’t hold up.
Why OT recovery is never as simple as it seems
The reality is that a company’s technology stack is rarely as modern as it might outwardly seem. Critical processes still rely on unsupported operating systems like Windows XP or Windows 7, bespoke embedded editions, or equipment controlled by aging Programmable Logic Controllers (PLCs).
Windows XP support ended in 2014, yet many organizations continue to operate XP-dependent devices. These systems often sit behind brittle chains of custom drivers and proprietary interfaces that may not have been manufactured in years.
Documentation is often missing, and the engineers who originally configured them have long since moved on. What’s left are inconsistent system states that can’t easily be lifted onto new or even slightly different hardware during a crisis.
Some OT environments limit change by necessity. Hospitals must avoid patching certain devices to maintain certification; manufacturing lines depend on chipsets that can’t be virtualized; air-gapped or remote sites rely on images that may not reflect current conditions.
In these cases, a backup that “succeeds” is often just one that didn’t encounter an obvious error – not one that can actually be restored.
Production lines, clinical systems, logistics hubs, and industrial control networks aren’t built with pause buttons. Even brief outages ripple outward into missed quotas, stalled deliveries, spoiled batches, safety risks, or overtime recovery costs.
It’s why ransomware campaigns increasingly target OT systems: they know the business impact is so severe that many organizations will pay simply to resume operations.
The Jaguar Land Rover incident, dubbed by some as “the most costly cyberattack in UK history”, is a case in point. When issues linked to unprepared OT processes disrupted production, delays cascaded across supply chains and dealer networks for weeks.
It demonstrated a truth the OT sector knows all too well – once operations stop, the financial and operational damage continues long after systems come back online.
Without proof that systems can be restored reliably, organizations are effectively gambling their production schedules, reputation, and revenue on the hope that the restore will work when they need it most.
How to validate your backups
So how do you actually validate? It’s not a single test – it’s a systematic process that moves from quick checks to full-scale recovery drills. Here’s how:
Start with integrity checks: Run hash verification or checksum comparisons to confirm that backup data matches the source and hasn’t been corrupted. This catches silent data degradation – file corruption, partial overwrites, and unexpected changes that sit undetected for months.
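To make this concrete, here’s a minimal sketch of a manifest-based integrity check in Python. The manifest format, file paths, and directory layout are assumptions for illustration, not any specific backup product’s format:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large backup images don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(manifest_path: Path) -> list[str]:
    """Compare each backup file against the hash recorded at backup time.

    Assumes the manifest is JSON of {"relative/path": "expected sha256"}
    and sits in the root of the backup set. Returns a list of failures;
    an empty list means everything verified.
    """
    manifest = json.loads(manifest_path.read_text())
    root = manifest_path.parent
    failures = []
    for rel_path, expected in manifest.items():
        target = root / rel_path
        if not target.exists():
            failures.append(f"MISSING: {rel_path}")
        elif sha256_of(target) != expected:
            failures.append(f"CORRUPT: {rel_path}")
    return failures

if __name__ == "__main__":
    problems = verify_backup(Path("/backups/line-7/manifest.json"))
    for p in problems:
        print(p)
    print("OK" if not problems else f"{len(problems)} file(s) failed verification")
```

Streaming the hash keeps memory flat even on multi-gigabyte images, and running a check like this on a schedule is what turns it from a one-off audit into continuous validation.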
Move to virtual test restores: Boot a backup in an isolated virtual environment to confirm that operating systems, drivers, and applications load as expected. This reveals missing dependencies, configuration issues, and service initialization failures that integrity checks can’t detect.
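A scripted version of this step might look like the sketch below, which assumes QEMU is installed and the backup has already been exported as a raw disk image; the image path, memory size, and timeout are placeholders:

```python
import subprocess

# Hypothetical path to a disk image exported from the backup set.
IMAGE = "/backups/line-7/hmi-station.img"

# Boot the image in an isolated VM: no network, no display, writes discarded.
# -snapshot diverts all writes to a temp file, so the backup stays pristine;
# -nic none keeps the restored system off the network during the test.
cmd = [
    "qemu-system-x86_64",
    "-m", "4096",
    "-snapshot",
    "-drive", f"file={IMAGE},format=raw",
    "-nic", "none",
    "-display", "none",
]

try:
    # Crude health check: if the VM is still alive at the timeout, the image
    # at least boots; a real drill would script a login or service probe.
    subprocess.run(cmd, timeout=300)
    print(f"{IMAGE}: VM exited early - investigate the image")
except subprocess.TimeoutExpired:
    print(f"{IMAGE}: still running after 5 minutes - treating boot as successful")
```

The isolation matters as much as the boot itself: keeping the test VM off the network means a backup that turns out to harbor ransomware can’t reach production while you’re evaluating it.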
Test on actual hardware: Restore to the same type of production hardware you’d use in a real recovery. This exposes physical dependencies that virtualization masks: driver compatibility issues, firmware mismatches, and hardware-specific configurations. A backup that boots in a VM might fail entirely on real hardware.
Run full recovery drills: Restoring one system is different from restoring 20 or 200. Run scenario-based drills that mirror real incidents – ransomware, site failures, supply chain disruptions – and document how long recovery actually takes versus your RTO targets.
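Recording drill timings against agreed targets is the part that tends to get skipped, yet it is trivial to automate. The harness below is purely illustrative – the system names, RTO figures, and restore function are invented for the example:

```python
import time
from dataclasses import dataclass

@dataclass
class DrillResult:
    system: str
    rto_minutes: float     # agreed recovery time objective
    actual_minutes: float  # measured during the drill

def run_drill(system: str, rto_minutes: float, restore_fn) -> DrillResult:
    """Time a scripted restore and record how it compares to the RTO."""
    start = time.monotonic()
    restore_fn()  # whatever actually performs the restore for this system
    elapsed_min = (time.monotonic() - start) / 60
    return DrillResult(system, rto_minutes, elapsed_min)

def report(results: list[DrillResult]) -> None:
    """Print a pass/miss line per system so drift from RTO targets is visible."""
    for r in results:
        status = "PASS" if r.actual_minutes <= r.rto_minutes else "MISS"
        print(f"{status}  {r.system}: {r.actual_minutes:.1f} min "
              f"(target {r.rto_minutes:.0f} min)")

# Example with a placeholder restore standing in for the real procedure:
results = [run_drill("historian-db", rto_minutes=60, restore_fn=lambda: time.sleep(1))]
report(results)
```

Even this much gives you a trend line: if drill times creep toward the RTO quarter after quarter, you find out in a report rather than mid-incident.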
Build it into incident response: Train teams on which backups to use in different scenarios, how to isolate compromised systems, and how to restore in the right order. Make recovery muscle memory, not something you frantically figure out during a crisis.
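Restore order in particular shouldn’t live in anyone’s head. One way to capture it as data is a dependency map fed through a topological sort, as in this sketch using Python’s standard-library graphlib; the systems and dependencies shown are hypothetical:

```python
from graphlib import TopologicalSorter

# Each system lists what must be running before it can be restored.
# These names and dependencies are illustrative, not a real topology.
dependencies = {
    "domain-controller": set(),
    "historian-db":      {"domain-controller"},
    "scada-server":      {"domain-controller", "historian-db"},
    "hmi-stations":      {"scada-server"},
}

# static_order() yields systems so that every dependency comes first.
restore_order = list(TopologicalSorter(dependencies).static_order())
print(" -> ".join(restore_order))
# domain-controller -> historian-db -> scada-server -> hmi-stations
```

Kept in version control alongside the runbook, a map like this survives staff turnover and can be re-derived in seconds when the environment changes.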
Document and refine: After every test, record what worked and what didn’t. Update your runbooks, feed lessons back into your backup schedule and storage choices, and create a cycle of continuous improvement. The 3-2-1-1-0 model captures this in its final digit: zero errors.
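As a footnote to that model: the full 3-2-1-1-0 rule – three copies, two media types, one offsite, one offline or immutable, zero verification errors – can itself be checked mechanically against a backup inventory. Here is a product-agnostic sketch, with the record structure assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str         # e.g. "disk", "tape", "object-storage"
    offsite: bool      # stored away from the primary site
    immutable: bool    # offline, air-gapped, or write-locked
    verified_ok: bool  # passed the integrity and restore tests above

def meets_3_2_1_1_0(copies: list[BackupCopy]) -> dict[str, bool]:
    """Evaluate each digit of the 3-2-1-1-0 rule against a copy inventory."""
    return {
        "3 copies":            len(copies) >= 3,
        "2 media types":       len({c.media for c in copies}) >= 2,
        "1 offsite":           any(c.offsite for c in copies),
        "1 offline/immutable": any(c.immutable for c in copies),
        "0 errors":            all(c.verified_ok for c in copies),
    }
```

The “0 errors” entry is the one that ties the whole model back to validation: it only returns true if every copy has actually passed the tests described above.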
When organizations rehearse these restores systematically and refine their processes based on results, they turn backup and recovery from a box-ticking exercise into a resilient operational function. Validation gives you certainty, not hope, that recovery will work when it really counts.
The green light means nothing
I’m a backup and recovery expert, and this is why you shouldn’t just trust me – or anyone who says your backups will simply work when you need them.
When it comes to operational resilience, organizations should operate with zero trust until they can prove to themselves, and demonstrate to others, that they can recover exactly as needed. Trust is what you place in a green light on a dashboard. Proof is what you earn through testing and validation.
In OT environments – where downtime carries immediate financial and safety costs, where legacy systems can’t be easily rebuilt, and where attackers target the most vulnerable points – proof isn’t optional. A completed backup offers reassurance. A validated backup offers certainty. And in critical infrastructure, only certainty keeps operations running.