The relationship between cybersecurity and machine learning (ML) began with an ambitious yet simple idea: harness everything algorithms have to offer to identify patterns in vast datasets.
Prior to this, traditional threat detection relied heavily on signature-based techniques – effectively digital fingerprints of known threats. These methods, while effective against familiar malware, struggled to keep pace with the increasingly sophisticated tactics of cybercriminals and with zero-day attacks.
This created a gap, and with it a wave of interest in using ML to identify anomalies, recognize patterns indicative of malicious behavior, and essentially predict attacks before they could wreak havoc. Some of the earliest successful applications of ML in the space were anomaly-based intrusion detection systems (IDS) and spam detection.
These early iterations relied heavily on supervised learning, where historical data – both malicious and benign – was fed to algorithms to teach them to differentiate between the two. Over time, ML-powered applications grew to incorporate unsupervised learning and even reinforcement learning to adapt to a changing threat landscape.
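To make that supervised approach concrete, here is a minimal sketch of the kind of classifier an early ML-based IDS might have used. The feature set and the synthetic traffic data are illustrative assumptions, not a real dataset:

```python
# A minimal sketch of supervised intrusion detection: labeled historical
# traffic is fed to a classifier so it learns to separate malicious from
# benign connections. Features and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical per-connection features: duration, bytes sent,
# bytes received, failed logins. Label 1 = malicious, 0 = benign.
benign = rng.normal(loc=[5, 2_000, 4_000, 0], scale=[2, 500, 800, 0.2], size=(500, 4))
malicious = rng.normal(loc=[60, 50_000, 300, 5], scale=[20, 10_000, 100, 2], size=(50, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Train on observed history, then check how well it generalizes.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```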
Falling short of expectations
Recently, the conversation has shifted to large language models (LLMs) like GPT-4. These models excel at summarizing reports, synthesizing large volumes of information, and generating natural language content. In the cybersecurity industry, they’ve been used to generate executive summaries and parse threat intelligence feeds – both of which require handling vast amounts of data and presenting it in an easy-to-understand form.
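As a rough illustration of that summarization use case, the sketch below feeds a threat intel excerpt to an LLM via the OpenAI Python client. The feed text, prompt and model choice are placeholders, not a production pipeline:

```python
# A minimal sketch of LLM-assisted threat intel summarization, assuming
# the OpenAI Python client. The feed excerpt and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

feed_excerpt = """
2024-05-01 Phishing campaign targeting finance teams; lures reference
invoice disputes. 2024-05-02 New loader variant observed beaconing to
known C2 infrastructure...
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Summarize this threat "
                    "intelligence for a non-technical executive in "
                    "three short bullet points."},
        {"role": "user", "content": feed_excerpt},
    ],
)
print(response.choices[0].message.content)
```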
In line with this, we’ve seen the concept of a “copilot for security” surface – a tool intended to assist security analysts the way a coding copilot helps a developer. The AI-powered copilot would act as a virtual Security Operations Center (SOC) analyst. Ideally, it would not just handle vast amounts of data and present it in a comprehensible way, but also sift through alerts, contextualize incidents, and even propose follow-up actions.
However, the reality has fallen short of that ambition. Whilst they show promise in specific workflows, LLMs have yet to deliver an indispensable, transformative use case for SOC teams.
Undoubtedly, cybersecurity is intrinsically contextual and complex. Analysts piece together fragmented information, understand the broader implications of a threat, and make decisions that require a nuanced understanding of their organization – all under immense pressure. These copilots can neither replace the expertise of a seasoned analyst nor effectively address the glaring pain points analysts face, because they lack the situational awareness and deep organizational understanding needed to make critical decisions.
This means that rather than serving as a dependable virtual analyst, these tools have often become a “solution looking for a problem,” adding yet another layer of technology that analysts must understand and manage without delivering commensurate value.
A problem and a solution: AI, meet AI
As it stands, current implementations of AI are struggling to find their groove. But if businesses are going to properly support their SOC analysts, how do we bridge this gap?
The answer could lie in the development of agentic AI – systems capable of taking proactive, independent action, combining automation with autonomy. Its introduction could transform AI from a handy but passive assistant into a crucial member of the SOC team.
By allowing AI-driven entities to actively defend systems, engage in threat hunting, and adapt to novel threats without the constant need for human direction, agentic AI offers a promising step forward for defensive cybersecurity. For example, instead of waiting for an analyst to issue commands, agentic AI could act on its own: isolating a compromised endpoint, rerouting network traffic, or even engaging in deception techniques to mislead attackers.
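To illustrate the idea, here is a hypothetical sketch of the sense-decide-act loop such an agent might run. The alert fields, response actions and confidence threshold are assumptions for illustration, not any vendor’s API:

```python
# A hypothetical sketch of an agentic response loop. Alert fields,
# threshold, and actions are illustrative assumptions, not a real product.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    technique: str     # e.g. an ATT&CK technique ID
    confidence: float  # model-assigned score, 0.0-1.0

def isolate_endpoint(host: str) -> None:
    print(f"[action] isolating {host} from the network")

def reroute_traffic(host: str) -> None:
    print(f"[action] rerouting traffic away from {host}")

def escalate_to_analyst(alert: Alert) -> None:
    print(f"[action] escalating {alert.host} ({alert.technique}) for human review")

AUTO_ACT_THRESHOLD = 0.9  # assumed policy: act alone only on high confidence

def respond(alert: Alert) -> None:
    # The agent acts without waiting for an analyst's command,
    # but falls back to escalation when its confidence is low.
    if alert.confidence >= AUTO_ACT_THRESHOLD:
        isolate_endpoint(alert.host)
        reroute_traffic(alert.host)
    else:
        escalate_to_analyst(alert)

respond(Alert(host="laptop-042", technique="T1486", confidence=0.95))
respond(Alert(host="ceo-laptop", technique="T1078", confidence=0.55))
```

The threshold is the crux of the design: deciding when the agent acts alone and when it defers to a human is exactly where the question of trust arises.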
Have you put your trust in the machine?
Despite this potential, organizations have often been slow to adopt security technology that acts on its own – and that caution may be well founded. Nobody wants to lock a senior executive out of their laptop over a false alert, or cause an outage in production. However, with the relationship between ML and cybersecurity set to continue developing, businesses mustn’t be deterred. Attackers don’t face this barrier: without a second thought, they will use AI to disrupt, steal from and extort their selected targets. This year, organizations will likely face the bleakest threat landscape to date, driven by the malicious use of AI.
Consequently, the only way for businesses to combat this will be to join the AI arms race – using agentic AI to back up overwhelmed SOC teams through autonomous, proactive action that lets organizations actively defend systems, engage in threat hunting and adapt to novel threats without requiring constant human intervention.