A recent survey of 400 Chief Information Security Officers from businesses in the UK and US found that 72% believe AI solutions will lead to security breaches. At the same time, 80% said they intend to implement AI tools to defend against AI. This is another reminder of both the promise and the threat of AI. On one hand, AI can be used to create unprecedented security arrangements and enable cybersecurity experts to go on the offensive against hackers. On the other hand, it will enable automated attacks at industrial scale and with far greater sophistication. For tech companies caught in the middle of this war, the big questions are how worried they should be and what they can do to protect themselves.
First, let’s step back and look at the current state of play. According to data compiled by security firm Cobalt, cybercrime is predicted to cost the global economy $9.5 trillion in 2024. Seventy-five percent of security professionals have observed an increase in cyberattacks over the past year, and the cost of these attacks is likely to rise by at least 15% each year. For businesses, the figures are also pretty grim: IBM reported that the average data breach in 2023 cost $4.45 million, a 15% rise since 2020.
On the back of this, the cost of cybersecurity insurance has risen 50%, and businesses now spend $215 billion on risk management products and services. Healthcare, finance and insurance organizations, and their partners, are most at risk of attack. The tech industry is particularly exposed given the volume of sensitive data startups often handle, their limited resources compared with large multinationals, and a culture geared towards scaling quickly, often at the expense of IT infrastructure and procedures.
The challenge of differentiating AI attacks
The most telling stat comes from CFO magazine, which reported that 85% of cybersecurity professionals attribute the increase in cyberattacks in 2024 to the use of generative AI by bad actors. Look a little closer, however, and you find there are no clear figures on which attacks these were and, therefore, what impact they actually had. That is because one of the most pressing issues we face is that it is incredibly hard to establish whether a cybersecurity incident was carried out with the help of generative AI. The technology can automate the creation of phishing emails, social engineering attacks and other types of malicious content.
However, because that content is designed to mimic human writing and responses, it can be very difficult to distinguish from human-made material. As a result, we don’t yet know the scale of generative AI-driven attacks or their effectiveness. If we cannot quantify the problem, it is difficult to know just how concerned we should be.
For startups, that means the best course of action is to focus on mitigating threats more generally. All evidence suggests that existing cybersecurity measures and solutions, underpinned by best-practice data governance procedures, are sufficient to counter the current threat posed by AI.
The greater cybersecurity risk
With some irony, the biggest existential threat to organizations isn’t necessarily AI being used in a diabolically brilliant way; it’s their own very human employees using it carelessly or failing to follow existing security procedures. For example, employees who share sensitive business information with services such as ChatGPT risk that data being retrieved at a later date, which could lead to leaks of confidential information and subsequent hacks. Reducing this threat means having proper data protection systems in place and better educating generative AI users on the risks involved.
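As an illustration of the kind of guardrail that can sit alongside that education, the minimal sketch below redacts obviously sensitive strings from a prompt before it is sent to an external AI service. The patterns and the send_to_llm stub are illustrative assumptions, not a complete data loss prevention tool, and any real deployment would need patterns tuned to the organization's own data.

```python
# Minimal sketch: strip obviously sensitive strings from a prompt before it
# leaves the company network for an external LLM service. Patterns and the
# send_to_llm stub are illustrative assumptions, not a production DLP tool.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|tok)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

def send_to_llm(prompt: str) -> None:
    # Stand-in for a real API call; here we only show what would be sent.
    print("Outbound prompt:", redact(prompt))

if __name__ == "__main__":
    send_to_llm(
        "Summarise this contract for jane.doe@example.com, "
        "card 4111 1111 1111 1111, key sk-abcdef1234567890abcd"
    )
```

A filter like this cannot catch every leak, which is why it complements, rather than replaces, training employees on what should never be pasted into external tools.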
Education extends to helping employees understand the current capabilities of AI, particularly when countering phishing and social engineering attacks. Recently, a finance officer at a major company paid out $25 million to fraudsters after being tricked by a deepfake conference call mimicking the company’s CFO. So far, so scary. However, read into the incident and you find it was not ultra-sophisticated from an AI perspective; it was only one small step above a scam from a few years ago that tricked the finance departments at scores of businesses (many of which were startups) into sending money to fake client accounts by mimicking the email address of their CEO. In both instances, if basic security and compliance checks, or even common sense, had been applied, the scam would quickly have been uncovered. Teaching your employees how AI can be used to generate the voice or appearance of other people, and how to spot these hacks, is as important as having a robust security infrastructure.
Put simply, AI is a clear long-term threat to cybersecurity but, until we see greater sophistication, current security measures are sufficient if they are followed to the letter. Businesses need to keep following strict cybersecurity best practices, reviewing their processes and educating their employees as the threat evolves. The cybersecurity industry is used to adapting to new threats and bad actor methods, but businesses cannot afford to rely on outdated security technology or procedures.