
For years, cybersecurity wisdom has been reassuringly simple: keep reliable backups and you can recover from almost anything. Backups have long been treated as the ultimate safety net: the digital equivalent of a lifeboat when systems fail or attacks occur.
But now, that assumption is becoming increasingly dangerous.
VP of Product Management at N-able.
With ransomware and data corruption becoming a question of when, not if, many organizations are discovering too late that backups alone do not equal resilience.
Attackers have recognized that if backups can neutralize the impact of ransomware, then the backups themselves become the target. Increasingly, cyber criminals are not just sinking the ship; they are destroying the lifeboats as well.
Attacks that look beyond the initial impact
This shift reflects the industrialization of cybercrime. Modern ransomware operations are no longer the work of opportunistic hackers. They are organized and highly coordinated criminal enterprises.
Attackers now conduct detailed reconnaissance before launching an attack: mapping networks, identifying critical assets and looking for weaknesses they can exploit. During this process, backup infrastructure is often among the first systems they investigate.
The logic is straightforward: if attackers can compromise or corrupt backups, they dramatically increase the pressure on organizations to pay a ransom. Without clean data to aid restoration, businesses face prolonged downtime, operational disruption and potentially severe financial consequences.
Yet many organizations still treat backups as a standalone capability, rather than part of a wider resilience strategy. In practice, this often means backups remain connected to production environments, managed with insufficient access controls, or left without continuous monitoring.
These weaknesses create opportunities for attackers to tamper with backup configurations, delete restore points, or quietly corrupt data over time. When the moment of crisis arrives, organizations may discover that the systems they trusted to save them are no longer usable.
Data resilience becomes front of mind
The conversation must shift from backup to data resilience. Data resilience recognizes that protecting data is not simply about storing copies.
It is about ensuring those copies remain secure, trustworthy and recoverable, even when an organization’s primary environment has been compromised. Achieving this requires a fundamentally different approach to data protection.
Rather than operating as a standalone afterthought, backups must be integrated into a broader resilience strategy designed to withstand cyber attacks, operational failures and human error. In other words, backups must be designed with the expectation that attackers will attempt to compromise them.
Why immutability matters
One of the most important foundations of modern data resilience is immutability. Immutable backups cannot be altered or deleted once written, providing a crucial safeguard against both external attackers and internal threats.
By ensuring that backup data remains unchanged for a defined period, organizations create a reliable foundation for recovery even if other systems have been compromised.
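As a concrete illustration of the principle, here is a minimal Python sketch of a write-once, retention-locked store. The class and method names are hypothetical; in practice this role is played by vendor features such as object lock or WORM storage, not application code:

```python
from datetime import datetime, timedelta, timezone

class ImmutableBackupStore:
    """Illustrative write-once store: objects cannot be overwritten,
    and cannot be deleted until their retention period expires."""

    def __init__(self, retention_days: int):
        self.retention = timedelta(days=retention_days)
        self._objects = {}  # key -> (data, locked_until)

    def write(self, key: str, data: bytes) -> None:
        # Immutability: a key can be written exactly once.
        if key in self._objects:
            raise PermissionError(f"{key} is immutable; overwrite denied")
        locked_until = datetime.now(timezone.utc) + self.retention
        self._objects[key] = (data, locked_until)

    def delete(self, key: str) -> None:
        # Deletion is refused while the retention lock is in force,
        # regardless of who asks -- including compromised admin accounts.
        _, locked_until = self._objects[key]
        if datetime.now(timezone.utc) < locked_until:
            raise PermissionError(f"{key} is retention-locked until {locked_until}")
        del self._objects[key]

    def read(self, key: str) -> bytes:
        return self._objects[key][0]
```

The key design point is that the deny decision lives in the storage layer itself, so an attacker with stolen credentials in the production environment still cannot rewrite history.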
Isolation is also key. Backup environments that remain tightly connected to production systems are inherently vulnerable. Architectures that logically separate or isolate backup infrastructure can significantly reduce the attack surface and make it far more difficult for attackers to manipulate or destroy backup data.
Together, immutability and isolation create the conditions necessary for trusted recovery.
Detecting problems before it’s too late
However, protecting backups from direct attack is only part of the equation. Organizations must also be able to detect unusual activity within their backup environment.
Increasingly, attackers attempt to manipulate backup configurations or corrupt data gradually so that clean recovery points disappear over time. Without visibility into these changes, such activity can remain unnoticed until recovery is attempted, by which point it is too late.
Continuous monitoring and anomaly detection therefore play an essential role in modern data resilience strategies.
By analyzing backup behavior and identifying unusual patterns, such as unexpected configuration changes, irregular access attempts or suspicious changes in the data itself, organizations can detect potential threats much earlier.
This visibility allows security teams to investigate incidents quickly and prevent attackers from quietly undermining recovery options.
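As an illustrative sketch only (not any specific product's detection logic), checks of this kind can combine simple rules with basic statistics. The event fields and thresholds below are assumptions made for the example:

```python
from statistics import mean, stdev

def flag_backup_anomalies(job_sizes_mb, events, z_threshold=3.0):
    """Illustrative checks: flag restore-point tampering and backup
    job sizes that fall far outside the historical norm."""
    alerts = []

    # Rule-based check: destructive actions against the backup
    # catalog are rare in normal operation and always worth review.
    for e in events:
        if e["action"] in {"delete_restore_point", "modify_retention"}:
            alerts.append(f"{e['action']} by {e['user']} at {e['hour']:02d}:00")

    # Statistical check: a backup that is suddenly much smaller (or
    # larger) than history suggests deletion, encryption or corruption.
    history, latest = job_sizes_mb[:-1], job_sizes_mb[-1]
    if len(history) >= 2:
        sigma = stdev(history)
        if sigma > 0:
            z = abs(latest - mean(history)) / sigma
            if z > z_threshold:
                alerts.append(
                    f"backup size {latest} MB deviates {z:.1f} sigma from norm"
                )
    return alerts
```

Real deployments would feed these alerts into a SIEM or ticketing workflow rather than returning a list, but the two classes of signal, policy violations and statistical outliers, are the same.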
Ensuring recovery can be trusted
Speed of recovery is often the headline metric associated with backup solutions. However, in the context of cyber attacks, speed alone is not enough. Restoring compromised or infected data simply reintroduces the problem organizations are trying to solve.
Effective resilience therefore requires confidence that the data being restored is clean and uncompromised.
Many organizations are now incorporating verification and testing processes into their recovery strategies. Secure recovery environments, such as sandboxes used for forensic validation, allow teams to analyze data before bringing systems back online.
Automated recovery testing can also ensure that backups remain usable and that recovery procedures function as expected long before an actual incident occurs.
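A minimal Python sketch of such a drill, assuming each backup's SHA-256 digest was recorded at write time; the function names are illustrative, and a real drill would restore full systems rather than a single file:

```python
import hashlib
import os
import tempfile
from pathlib import Path

def verify_backup(backup_path, expected_sha256):
    """Compare a backup file's digest with the one recorded at write time."""
    digest = hashlib.sha256(Path(backup_path).read_bytes()).hexdigest()
    return digest == expected_sha256

def restore_drill(backup_path, expected_sha256):
    """Restore into a throwaway sandbox directory and confirm integrity,
    without touching production systems."""
    if not verify_backup(backup_path, expected_sha256):
        return False  # tampered or corrupted backup: do not restore
    with tempfile.TemporaryDirectory() as sandbox:
        restored = Path(sandbox) / "restored.dat"
        restored.write_bytes(Path(backup_path).read_bytes())
        # In a fuller drill this is where application-level checks run
        # (service starts, database opens, sample queries succeed).
        return restored.stat().st_size > 0
```

Running a drill like this on a schedule turns "we have backups" into an evidenced claim: the restore path is exercised, and silent corruption is caught before an incident forces the question.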
Designing for recovery from day one
Ultimately, the goal of resilience is not simply to survive an attack, but to maintain business continuity despite it. That means reducing downtime, protecting critical operations, and restoring services with confidence.
In a threat landscape where attackers are constantly evolving their tactics, organizations must do the same. Treating backups as a standalone solution is no longer sufficient.
Instead, organizations must design their data protection strategies with the assumption that systems will eventually be compromised. By building immutability, monitoring, isolation and trusted recovery into backup architectures from the outset, organizations can ensure that when an attack occurs, recovery remains possible.
Because in today’s cyber landscape, resilience is not defined by whether an organization can prevent every incident. It is defined by how quickly and how safely it can recover when prevention fails.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




