
    How AI Is Changing the Cloud Security and Risk Equation


    The AI boom is amplifying risks across enterprise data estates and cloud environments, according to cybersecurity expert Liat Hayun.

    In an interview with TechRepublic, Hayun, VP of product management and research for cloud security at Tenable, advised organisations to first understand their risk exposure and tolerance, and then to tackle key problems such as cloud misconfigurations and protecting sensitive data.

    Liat Hayun, VP of product management and research of cloud security at Tenable

    She noted that while enterprises remain cautious, AI’s accessibility is accentuating certain risks. However, she explained that CISOs today are evolving into business enablers — and AI could ultimately serve as a powerful tool for bolstering security.

    How AI is affecting cybersecurity, data storage

    TechRepublic: What is changing in the cybersecurity environment due to AI?

    Liat: First of all, AI has become much more accessible to organisations. If you look back 10 years, the only organisations creating AI had to have a specialised data science team, with PhDs in data science and statistics, to build machine learning and AI algorithms. AI has become much easier for organisations to create; it’s almost like introducing a new programming language or new library into their environment. So many more organisations — not just large organisations like Tenable, but also any start-up — can now leverage AI and introduce it into their products.

    SEE: Gartner Tells Australian IT Leaders To Adopt AI At Their Own Pace

    The second thing: AI requires a lot of data. So many more organisations need to collect and store higher volumes of data, which also sometimes has higher levels of sensitivity. Before, my streaming service would have saved only a few details about me. Now, maybe my geography matters, because they can create more specific recommendations based on that, or my age and my gender, and so on. Because they can now use this data for their business purposes — to generate more business — they’re much more motivated to store that data in higher volumes and with growing levels of sensitivity.

    TechRepublic: Is that feeding into growing usage of the cloud?

    Liat: If you want to store a lot of data, it’s much easier to do that in the cloud. Every time you decide to store a new type of data, it increases the volume of data you’re storing. You don’t have to go into your data centre and order and install new storage volumes. You just click, and bam, you have a new data store location. So the cloud has made it much easier to store data.

    These three components form a kind of circle that feeds itself. Because if it’s easier to store data, you can build more AI capabilities, and then you’re motivated to store even more data, and so on. So that’s what has happened in the world in the last few years — since LLMs became a much more accessible, common capability for organisations — introducing challenges across all three of these verticals.

    Understanding the security risks of AI

    TechRepublic: Are you seeing specific cybersecurity risks rise with AI?

    Liat: The use of AI in organisations, unlike the use of AI by individual people across the world, is still in its early phases. Organisations want to make sure that they’re introducing it in a way that, I would say, doesn’t create any unnecessary risk or any extreme risk. So in terms of statistics, we still only have a few examples, and they are not necessarily a good representation because they’re more experimental.

    One example of a risk is AI being trained on sensitive data. That’s something we are seeing. It’s not because organisations are not being careful; it’s because it’s very difficult to separate sensitive data from non-sensitive data and still have an effective AI mechanism that is trained on the right data set.
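    To see why that separation is hard, consider a naive redaction pass. The sketch below is hypothetical (the patterns, sample text, and labels are invented for illustration): it catches well-formatted identifiers but misses context-dependent sensitive data, which is exactly how such records end up in training sets.

    ```python
    import re

    # Naive redaction: catch a few well-known PII formats before training.
    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label.upper()}]", text)
        return text

    print(redact("Contact jane@example.com, SSN 123-45-6789."))
    # Prints: Contact [EMAIL], SSN [SSN].
    # But "my social is one two three..." or a name plus a birthdate
    # sails straight through, and the model trains on it anyway.
    ```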

    The second thing we’re seeing is what we call data poisoning. So, even if you have an AI agent that is being trained on non-sensitive data, if that non-sensitive data is publicly exposed, as an adversary, as an attacker, I can insert my own data into that publicly exposed, publicly accessible data storage and have your AI say things that you didn’t intend it to say. It’s not this all-knowing entity. It knows what it’s seen.
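    A minimal sketch of one defence against this, assuming records are published together with a manifest of content hashes (all names and data below are hypothetical): anything added to a publicly writable store after publication fails the check and is quarantined rather than trained on.

    ```python
    import hashlib
    import json

    def record_digest(record: dict) -> str:
        # Stable content hash for a training record.
        return hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()

    # Manifest built at publish time and stored separately from the data,
    # so an attacker who can write to the public bucket can't update it.
    published = [{"text": "a record we actually wrote"}]
    TRUSTED_MANIFEST = {record_digest(r) for r in published}

    def load_training_set(records: list[dict]) -> list[dict]:
        clean = []
        for record in records:
            if record_digest(record) in TRUSTED_MANIFEST:
                clean.append(record)
            else:
                # Added after publication — a possible poisoning attempt.
                print("quarantined:", record)
        return clean

    incoming = published + [{"text": "attacker-inserted content"}]
    print("kept:", load_training_set(incoming))
    ```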

    TechRepublic: How should organisations weigh the security risks of AI?

    Liat: First, I would ask how organisations can understand the level of exposure they have, which includes the cloud, AI, and data … and everything related to how they use third-party vendors, and how they leverage different software in their organisation, and so on.

    SEE: Australia Proposes Mandatory Guardrails for AI

    The second part is, how do you identify the critical exposures? So if we know it’s a publicly accessible asset with a high-severity vulnerability on it, that’s something that you probably want to address first. But it’s also a combination of the impact, right? If you have two issues that are very similar, and one can compromise sensitive data and one cannot, you want to address the one that can compromise sensitive data first.

    You also have to know which steps to take to address those exposures with minimal business impact.
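    One way to picture that prioritisation is a simple scoring pass over discovered exposures. The weights below are invented for the sketch — a real programme would tune them — but they encode the two tie-breakers Hayun mentions: public reachability and sensitive-data impact.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Exposure:
        asset: str
        severity: float       # e.g. CVSS base score, 0-10
        public: bool          # reachable from the internet?
        sensitive_data: bool  # can it compromise sensitive data?

    def risk_score(e: Exposure) -> float:
        score = e.severity
        if e.public:
            score *= 2.0      # publicly accessible assets jump the queue
        if e.sensitive_data:
            score *= 1.5      # impact breaks ties between similar issues
        return score

    exposures = [
        Exposure("internal-db", severity=8.1, public=False, sensitive_data=True),
        Exposure("public-api", severity=8.1, public=True, sensitive_data=False),
        Exposure("public-bucket", severity=6.5, public=True, sensitive_data=True),
    ]

    for e in sorted(exposures, key=risk_score, reverse=True):
        print(f"{risk_score(e):5.1f}  {e.asset}")
    ```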

    TechRepublic: What are some big cloud security risks you warn against?

    Liat: There are three things we usually advise our customers.

    The first one is on misconfigurations. Just because of the complexity of the infrastructure, the complexity of the cloud, and all the technologies it provides, even if you’re in a single cloud environment — but especially if you’re going multi-cloud — the chances of something becoming an issue just because it wasn’t configured correctly are still very high. So that’s definitely one thing I would focus on, especially when introducing new technologies like AI.
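    As a concrete, deliberately minimal example of what such a check looks like, the sketch below scans for one common AWS misconfiguration: S3 buckets without a full public access block. It assumes the boto3 library and configured AWS credentials; commercial CSPM tools run hundreds of rules like this across providers.

    ```python
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            block = s3.get_public_access_block(Bucket=name)
            config = block["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                # Some of the four public-access settings are disabled.
                print(f"WARN {name}: public access block partially enabled")
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"FAIL {name}: no public access block configured")
            else:
                raise
    ```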

    The second one is over-privileged access. Many people think their organisation is super secure. But if your house is a fort, and you’re giving your keys out to everyone around you, that is still an issue. So excessive access to sensitive data, to critical infrastructure, is another area of focus. Even if everything is configured perfectly and you don’t have any hackers in your environment, it introduces additional risk.
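    The “keys to everyone” problem can be made mechanical, too. This hypothetical sketch flags IAM-style policy statements that allow wildcard actions or apply to every resource; the policy document is inline for the demo, whereas a real audit would pull policies from the provider’s API.

    ```python
    # Flag allow-statements with wildcard actions or resources.
    policy = {
        "Statement": [
            {"Effect": "Allow", "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::app-logs/*"},
            {"Effect": "Allow", "Action": "*", "Resource": "*"},  # the house keys
        ]
    }

    def find_over_privileged(doc: dict) -> list[dict]:
        flagged = []
        for stmt in doc.get("Statement", []):
            actions = stmt.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            wildcard_action = any(a == "*" or a.endswith(":*") for a in actions)
            if stmt.get("Effect") == "Allow" and (
                wildcard_action or stmt.get("Resource") == "*"
            ):
                flagged.append(stmt)
        return flagged

    for stmt in find_over_privileged(policy):
        print("over-privileged:", stmt)
    ```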

    The aspect people think about the most is identifying malicious or suspicious activity as early as it happens. This is where AI can be taken advantage of: if we leverage AI tools within our security tools and our infrastructure, we can use the fact that they can look at a lot of data, and do that really fast, to identify suspicious or malicious behaviours in an environment. So we can address those behaviours, those activities, as early as possible, before anything critical is compromised.
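    A toy version of that idea, using scikit-learn’s IsolationForest as the “AI tool” — the event features (requests per minute, bytes egressed, distinct resources touched) and all numbers are invented for the sketch: train on normal activity, then alert on outliers.

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Baseline activity: [requests/min, bytes out, distinct resources touched]
    normal = rng.normal(loc=[50, 2_000, 5], scale=[10, 400, 2], size=(500, 3))
    suspicious = np.array([[400, 90_000, 60]])  # read burst with large egress

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    events = np.vstack([normal[:3], suspicious])
    for features, label in zip(events, model.predict(events)):
        status = "ALERT" if label == -1 else "ok"  # -1 marks an outlier
        print(status, features.astype(int))
    ```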

    Implementing AI ‘too good of an opportunity to miss out on’

    TechRepublic: How are CISOs approaching the risks you are seeing with AI?

    Liat: I’ve been in the cybersecurity industry for 15 years now. What I love seeing is that most security experts, most CISOs, are unlike what they were a decade ago. As opposed to being a gatekeeper, as opposed to saying, “No, we can’t use this because it’s risky”, they’re asking themselves, “How can we use this and make it less risky?” Which is an awesome trend to see. They’re becoming more of an enabler.

    TechRepublic: Are you seeing the good side of AI, as well as the risks?

    Liat: Organisations need to think more about how they’re going to introduce AI, rather than thinking “AI is too risky right now”. You can’t do that.

    Organisations that do not introduce AI in the next couple of years will just stay behind. It’s an amazing tool that can benefit so many business use cases, internally for collaboration, analysis, and insights, and externally, for the tools we can provide our customers. It’s just too good of an opportunity to miss out on. If I can help organisations achieve that mindset where they say, “OK, we can use AI, but we just need to take these risks into account,” I’ve done my job.

    Ben Abbott