In a business landscape dominated by rapid digital transformation and AI adoption, protecting all information, including data generated by AI, has never been more critical. As organizations increasingly rely on AI and machine learning technologies to streamline operations and increase productivity, classifying sensitive data and enforcing adequate access controls are paramount.
Microsoft’s advanced AI assistant, Copilot, is gaining traction in corporate environments and is changing how organizations interact with data across Microsoft 365 applications. However, while Copilot introduces a new horizon of possibilities, it also brings challenges related to data access and security that organizations must consider.
Copilot data security risks
Copilot suffers from four key security issues. First, its output inherits sensitivity labels from its input, which means that if data is not classified correctly, the output will be misclassified too. For example, if sensitive data used to generate a quarterly financial report is not correctly classified at the input stage, Copilot will produce a comprehensive report that includes sensitive earnings data yet fails to mark it as confidential. Imagine if this report were inadvertently shared with an external stakeholder.
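To make the inheritance problem concrete, here is a minimal Python sketch, purely illustrative rather than Copilot's actual labeling logic, in which an output simply takes the most restrictive label among its inputs; the label names and file names are hypothetical.

```python
# Minimal sketch of sensitivity-label inheritance; not Copilot's real implementation.
# Label names and file names are hypothetical.

LABEL_RANK = {"Public": 0, "Internal": 1, "Confidential": 2}

def inherited_label(input_labels):
    """The output inherits the most restrictive label among its inputs."""
    return max(input_labels, key=lambda label: LABEL_RANK[label])

# The earnings workbook was never classified, so it sits at "Public".
inputs = {
    "q3_earnings_draft.xlsx": "Public",        # should have been "Confidential"
    "press_release_template.docx": "Public",
}

report_label = inherited_label(inputs.values())
print(report_label)  # -> "Public": the generated report carries no protection at all
```

Because the misclassification happens upstream, the generated report looks perfectly ordinary to every downstream control.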
Second, Copilot also inherits access control permissions from its input, so its output carries the same permissions. If the source data suffers from inappropriate permissioning, sharing or entitlements, the output inherits those same issues, which can lead to a devastating data breach or loss. Concentric AI’s Data Risk Report shows that far too many business-critical files are at risk due to oversharing, erroneous access permissions and inappropriate classification, and can be seen by internal or external users who should not have access.
To illustrate, an HR manager using Copilot to compile an internal report that includes personal employee information may have source data that has overly permissive access controls, allowing any department member to view all employee records. As a result, the Copilot-generated report would inherit these permissions, and sensitive employee information would be accessible to all department members, violating privacy policies and potentially leading to legal challenges.
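A similarly simplified sketch, again hypothetical rather than a real Microsoft 365 permission model, shows how one over-permissive grant on a source file becomes a grant on everything generated from it; the group and file names are invented for illustration.

```python
# Minimal sketch of permission inheritance; not an actual Microsoft 365 ACL model.
# Group and file names are hypothetical.

source_acls = {
    "employee_records.csv": {"hr-managers", "all-department-staff"},  # over-permissive
    "report_template.docx": {"hr-managers"},
}

# If the generated report inherits the union of its sources' read permissions,
# every over-permissive grant on an input becomes a grant on the output.
report_readers = set().union(*source_acls.values())
print(report_readers)  # includes "all-department-staff", violating privacy policy
```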
Copilot’s third key security issue is that company context on sensitivity is not factored into the output. While every company has sensitive data, such as financial records, intellectual property and confidential customer data, Copilot is unlikely to factor this context into its decision-making about what it outputs or who should have access to it.
Consider a product development team using Copilot to brainstorm new product ideas based on existing intellectual property (IP) and R&D data, with inputs including confidential information about upcoming patents. Copilot, lacking context on the company’s sensitivity towards this IP, incorporates detailed descriptions of these patents in its output. If this output is shared with a broader audience, the company has inadvertently exposed future product plans and risks IP theft.
Finally, Copilot output is unclassified by default, and output that may be sensitive could be accessible to anyone. For example, a marketing team might use Copilot to analyze customer feedback and generate a report on customer satisfaction trends. The input data contains sensitive customer information, including criticism of unreleased products. Because the output is not classified, the generated report does not flag the sensitive customer feedback as confidential. If the report is uploaded to a shared company server without appropriate access restrictions, internal leaks and competitive disadvantage become a significant risk.
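One simple safeguard is to refuse to publish unlabeled content to broadly shared locations. The sketch below is illustrative only, not a feature of any particular product, and the function and file names are hypothetical.

```python
# Illustrative guard: refuse to publish content with no sensitivity label to a
# broadly shared location. Not a product feature; names are hypothetical.

def publish_to_shared_folder(document: dict) -> None:
    label = document.get("sensitivity_label")
    if label is None:
        raise PermissionError(
            f"{document['name']} is unlabeled; classify it before sharing broadly."
        )
    print(f"Published {document['name']} with label {label}")

copilot_report = {"name": "customer_satisfaction_trends.docx"}  # no label by default
try:
    publish_to_shared_folder(copilot_report)
except PermissionError as err:
    print(err)
```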
The importance of data security posture management for AI usage
To help address these issues, robust data security posture management (DSPM) is an essential prerequisite to deploying and operating Copilot, ensuring that organizations balance Copilot’s productivity gains with the protection of sensitive data.
DSPM empowers organizations to discover sensitive data, gaining comprehensive visibility into where it resides and what types of sensitive data exist across cloud environments. It also classifies data, tagging and labeling sensitive information. In addition, DSPM monitors and identifies risks, proactively detecting and assessing threats to business-critical data and preventing potential breaches before they occur. Finally, it remediates issues and protects sensitive information against unauthorized access and data loss.
With DSPM, sensitive data is identified and classified. As data moves through the network and across structured and unstructured data stores, it is labeled appropriately no matter where it resides. It is then monitored for risks such as inappropriate permissions, risky sharing, inaccurate entitlements and incorrect location, and any risks that are detected can be remediated.
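To show how these stages fit together, here is a schematic Python sketch of the discover, classify, monitor and remediate cycle described above. Every function, pattern and file name is a hypothetical stand-in for capabilities a real DSPM product would automate at scale.

```python
# Schematic sketch of the DSPM cycle; every function here is a hypothetical
# stand-in for what a real DSPM product automates.

import re

def discover(data_stores):
    """Walk every store and return the documents found there."""
    return [doc for store in data_stores for doc in store["documents"]]

def classify(doc):
    """Tag a document based on its content (toy pattern-based example)."""
    sensitive = re.search(r"\b\d{3}-\d{2}-\d{4}\b", doc["content"])  # SSN-like pattern
    doc["label"] = "Confidential" if sensitive else "Internal"

def find_risks(doc):
    """Flag risky sharing: confidential data readable by everyone."""
    if doc["label"] == "Confidential" and "everyone" in doc["readers"]:
        return ["overshared"]
    return []

def remediate(doc, risks):
    """Tighten access when a risk is detected."""
    if "overshared" in risks:
        doc["readers"].discard("everyone")

stores = [{"documents": [
    {"name": "payroll.txt", "content": "SSN 123-45-6789", "readers": {"everyone", "hr"}},
]}]

for document in discover(stores):
    classify(document)
    remediate(document, find_risks(document))
    print(document["name"], document["label"], document["readers"])
```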
How DSPM addresses Copilot security risks, and key benefits
DSPM’s approach to managing these risks involves sophisticated natural language processing (NLP) capabilities to accurately categorize data, including outputs from Copilot. This ensures that sensitive information is correctly identified and protected, addressing potential security risks without compromising productivity. As a result, DSPM addresses the four security challenges organizations face before, during and after a Copilot deployment.
To address incorrectly classified output due to inherited sensitivity labels, DSPM solutions implement advanced data discovery and classification processes that automatically identify and classify data based on its content and context before it is fed into Copilot. By ensuring that all data is accurately classified at the source, DSPM prevents incorrect sensitivity labels from being propagated through Copilot’s outputs. DSPM can also continuously monitor data flows, reclassifying data as necessary and ensuring that any data processed by Copilot, and its subsequent outputs, maintains the correct classification levels.
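As a rough illustration of classifying at the source, the sketch below reclassifies each document from its content before it enters the assistant's grounding set. It is not an actual Copilot integration point; the rules, labels and file names are hypothetical.

```python
# Illustrative pre-input classification; not an actual Copilot integration point.
# The rules, labels and file names are hypothetical.

import re

FINANCIAL_TERMS = ("revenue", "earnings", "forecast")

def classify_at_source(doc):
    """Derive a label from content rather than trusting whatever tag is already present."""
    content = doc["content"].lower()
    if any(term in content for term in FINANCIAL_TERMS) or re.search(r"\$\d", content):
        return "Confidential"
    return "Internal"

def prepare_for_copilot(documents):
    """Reclassify every source document before it is made available to the assistant."""
    for doc in documents:
        correct = classify_at_source(doc)
        if doc.get("label") != correct:
            doc["label"] = correct  # fix the label at the source, before any output inherits it
    return documents

docs = [{"name": "q3_earnings.txt", "content": "Q3 earnings forecast: $12M", "label": "Public"}]
print(prepare_for_copilot(docs))  # the mislabeled file is now "Confidential"
```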
When it comes to inappropriate permissioning, sharing and entitlements, DSPM addresses this challenge by providing granular visibility into data access controls and entitlements across the organization’s data stores. It automatically assesses and adjusts permissions based on the data’s classification, ensuring that only authorized users have access to sensitive information. Before data is processed by Copilot, DSPM tools can enforce the principle of least privilege, correcting over-permissive access settings and preventing sensitive outputs from being inadvertently shared or exposed. This proactive approach to permissions management significantly reduces the risk of data breaches and loss.
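A least-privilege adjustment can be sketched in the same spirit: strip any reader that a document's classification does not allow before the document is exposed to Copilot. The role names and allow-list below are hypothetical, not a vendor API.

```python
# Illustrative least-privilege adjustment; not a vendor API. Role names are hypothetical.

ALLOWED_READERS = {
    "Confidential": {"finance-team", "executives"},
    "Internal": {"all-employees"},
}

def enforce_least_privilege(doc):
    """Strip any reader that the document's classification does not allow."""
    permitted = ALLOWED_READERS[doc["label"]]
    removed = doc["readers"] - permitted
    doc["readers"] &= permitted
    return removed

doc = {"name": "q3_forecast.xlsx", "label": "Confidential",
       "readers": {"finance-team", "all-employees", "external-partner"}}
print(enforce_least_privilege(doc))  # over-permissive grants removed before Copilot sees the file
print(doc["readers"])                # only the permitted readers remain
```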
Regarding the lack of company context in output sensitivity, DSPM systems leverage sophisticated natural language processing and machine learning algorithms to understand the nuanced context of data, including its relevance to specific business processes and its sensitivity level. By integrating DSPM with Copilot, organizations can ensure Copilot is informed about company-specific sensitivity context, giving it a blueprint for factoring this critical information into the outputs it generates. This ensures that sensitive data, such as intellectual property or confidential business information, is handled appropriately, maintaining confidentiality and integrity.
Finally, DSPM solutions directly address the challenge of unclassified outputs by automatically classifying all data processed by Copilot, so that outputs are immediately tagged with the appropriate sensitivity labels. This automatic classification extends to Copilot-generated content, meaning any sensitive information contained within these outputs is recognized and protected according to its classification. By enforcing strict classification protocols, DSPM ensures that sensitive outputs are not inadvertently accessible, maintaining strict access controls based on the data’s sensitivity and compliance requirements.
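The idea can be sketched as a post-generation labeling step that scans generated text and applies the strictest matching label before the content is saved or shared. This is not a documented Copilot extension point; the patterns and label names are hypothetical.

```python
# Illustrative post-generation labeling step; not a documented Copilot extension point.
# Patterns and label names are hypothetical.

import re

SENSITIVE_PATTERNS = {
    "Confidential": [r"\b\d{3}-\d{2}-\d{4}\b", r"patent", r"unreleased"],
    "Internal": [r"roadmap"],
}
RANK = {"Public": 0, "Internal": 1, "Confidential": 2}

def label_output(text: str) -> str:
    """Scan generated text and return the strictest label whose patterns match."""
    label = "Public"
    for candidate, patterns in SENSITIVE_PATTERNS.items():
        if any(re.search(p, text, re.IGNORECASE) for p in patterns):
            if RANK[candidate] > RANK[label]:
                label = candidate
    return label

generated = "Summary of customer feedback on our unreleased headset models..."
print(label_output(generated))  # -> "Confidential", so access controls apply immediately
```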
The full potential of Copilot can be unlocked safely with DSPM. For any AI tool like Copilot, DSPM is critical before, during and after deployment. The risk to sensitive data is high enough without Copilot in the mix; adding it blindly only amplifies that risk for organizations.