
The worst security breaches rarely begin with fanfare. Rather, the most dangerous attacks are the ones that quietly slip past the perimeter and begin spreading while defenders remain unaware. It’s the time attackers spend under the radar inside the system that turns an intrusion into a disaster.
Yet, despite years of investment in preventative tools, we still see organizations consistently struggle to detect and contain attackers once they are inside.
Vice President of Industry Strategy at Illumio.
Often, the shortfall isn’t a lack of visibility or security alerts, but a lack of clarity. In today’s hybrid environments, resilience depends less on blocking every threat and more on spotting the ones already within reach.
The growing gap between detection and real visibility
It’s no secret that security teams are feeling increasingly overwhelmed, and it’s easy to see why when you start drilling into the numbers.
Our research shows that a typical organization faces more than 2,000 alerts every day, much of it noise that offers little real value.
Analysts spend more than 14 hours each week chasing false positives, and two-thirds of leaders admit their teams simply cannot keep up. Missed alerts quickly turn into missed opportunities to stop attackers early.
Tooling complexity adds to the problem. While most organizations now use multiple cloud detection and response platforms, we found that almost all (92%) still report significant capability gaps.
More data does not necessarily equal better detection, and overlapping systems create fragmented visibility and conflicting information. Without meaningful context to tie these signals together, defenders are left piecing together fragments of a story rather than seeing what truly matters.
Why lateral movement remains the attacker’s favorite blind spot
The widespread challenges of separating signal from noise are a massive boon to threat actors, who are increasingly favoring low-and-slow tactics. Once inside an organization, rather than act immediately, they often creep through the network, escalating privileges, probing workloads, and searching for sensitive systems.
This lateral movement is where small breaches escalate into major operational crises, and it remains one of the most difficult stages of an attack to detect.
It’s paying off for cyber attackers, with nearly 9 in 10 organizations telling us they experienced an incident involving lateral movement in the past year. On average, these breaches resulted in more than seven hours of downtime, with ongoing operational disruption and full recovery stretching on even longer.
These incidents persist because east–west traffic in modern hybrid environments remains poorly understood. Even when organizations believe they are monitoring internal communications effectively, almost 40% of that traffic lacks the context required for confident analysis.
Attackers thrive when defenders are overloaded, uncertain, or unable to distinguish legitimate activity from the early signs of an intrusion spreading through the network.
The importance of observability
Effective defense against these threats demands deep observability. There has been an ongoing push in this direction from industry bodies, with the UK’s National Cyber Security Centre (NCSC), for example, recently issuing guidance.
The NCSC stresses that organizations cannot hunt for threats they cannot see, and traditional indicators of compromise are no longer enough. Instead, defenders need visibility across behaviors, patterns, identities, workloads and east–west traffic to uncover the subtle signals that reveal an attacker already in motion.
This aligns closely with our own analysis on the lack of context for internal traffic, despite widespread confidence in monitoring capabilities.
Observability must move beyond collecting more logs. It requires understanding how systems relate, behave and change over time, and connecting those insights before an attacker does.
Why context, correlation and containment must replace alert-hunting
For years, we’ve seen security programs react to rising attack volumes by gathering more data. But far from keeping up, this approach often only serves to intensify alert fatigue.
Because large portions of network traffic still lack the context needed for meaningful investigation, analysts end up sifting through unprioritized alerts rather than focusing on attacker behavior.
What security teams need is a connected view of their environment, not isolated signals. Contextual models, such as security graphs, help map the relationships among workloads, identities, devices, and data flows.
They turn scattered indicators into a coherent picture. A low-level alert on one system can suddenly make sense when linked to suspicious behavior elsewhere, revealing attacker intent rather than isolated anomalies.
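To make the idea concrete, here is a minimal sketch of how a security graph can link two alerts through a shared relationship. The node names (workloads, identities) and the simple reachability check are illustrative assumptions; real security graph products model far richer relationships and behaviors.

```python
from collections import defaultdict, deque

class SecurityGraph:
    """Toy security graph: nodes are workloads/identities, edges are observed relationships."""

    def __init__(self):
        self.edges = defaultdict(set)  # node -> set of related nodes

    def relate(self, a, b):
        # Record a bidirectional relationship, e.g. workload <-> identity.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def connected(self, start, target, max_hops=3):
        # Breadth-first search: is `target` reachable from `start`
        # within `max_hops` relationships?
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, depth = queue.popleft()
            if node == target:
                return True
            if depth < max_hops:
                for nxt in self.edges[node] - seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
        return False

# Two alerts that look unrelated in isolation...
g = SecurityGraph()
g.relate("workload:web-01", "identity:svc-deploy")
g.relate("identity:svc-deploy", "workload:db-02")

# ...become one story once the graph links them through a shared identity.
print(g.connected("workload:web-01", "workload:db-02"))  # True
```

The point of the traversal is exactly the shift described above: instead of scoring each alert alone, the graph asks whether seemingly separate events sit on a common path an attacker could use.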
This shift from alert-hunting to understanding pathways is essential for breach containment. When defenders can see how systems interact and where the most sensitive assets sit, they can identify the routes an attacker is likely to take.
That clarity allows teams to act with confidence, slowing or stopping lateral movement before it spreads.
AI, automation, and scaling human judgment responsibly
As environments expand, the volume and complexity of security data have surpassed what human analysts can manage alone. This is where AI and automation play a critical role.
Many organizations are turning to AI and machine learning to improve detection accuracy and accelerate response times, seeing these capabilities as central to spotting lateral movement earlier and easing the burden of alert fatigue.
However, it’s a mistake to believe that investing in AI will solve everything on its own. AI is most effective when it augments, not replaces, human expertise. Automated systems can correlate signals across hybrid environments, enrich them with context and filter out noise, giving analysts a more precise starting point.
Combined with a security graph approach, for example, AI-powered analysis can continuously plot every workload and connection in real time, highlighting behavioral patterns that would be impossible to surface manually.
This creates a force multiplier that enables faster, more confident decisions and supports the rapid containment that modern resilience demands.
AI-powered security graphs can connect the dots of seemingly disparate network events, establishing a connected narrative where busy human analysts might see isolated alerts.
For example, a workload accessing a database it has never accessed before could be traced back to a misconfigured identity, revealing an attack path that is already being exploited.
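A behavioral baseline like the one in that example can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the workload, database, and identity names are invented, and a real system would learn baselines over a warm-up period before alerting.

```python
from collections import defaultdict

class AccessBaseline:
    """Flags the first time a workload is seen accessing a given database."""

    def __init__(self):
        self.seen = defaultdict(set)  # workload -> databases it has accessed

    def observe(self, workload, database, identity):
        # Return an alert dict the first time this pairing is seen, else None.
        if database in self.seen[workload]:
            return None
        self.seen[workload].add(database)
        return {
            "event": "first_time_access",
            "workload": workload,
            "database": database,
            "identity": identity,  # the start of the investigation trail
        }

baseline = AccessBaseline()
baseline.observe("web-01", "orders-db", "svc-app")  # normal path, learned once
alert = baseline.observe("web-01", "billing-db", "svc-deploy")
# The attached identity lets an analyst ask: should svc-deploy have this access?
```

Carrying the identity with the alert is what turns an anomaly ("new access") into an attack-path hypothesis ("a misconfigured identity is being exploited").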
What this means for business and IT leaders
For leaders, the path forward begins with recognizing that resilience depends on what happens after an attacker gets in. That means investing in observability across the entire hybrid estate, not only perimeter logs, but the identities, workloads, cloud services and east–west traffic that reveal how attacks unfold.
It also requires shifting detection strategies toward behaviors and relationships, supported by threat hunting and hypothesis-driven investigation. Automation and AI can help, but only when grounded in high-quality, contextualized data.
Finally, success should be measured not by the number of blocked threats, but by how quickly organizations can detect, contain and recover from an intrusion.
Breaches are now an inevitable part of operating in fast-moving, hybrid environments. What matters is how quickly an organization can recognize when something is wrong and limit the impact.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




