You’re no doubt familiar with shadow IT — the practice of employees using software, applications and other tech tools that aren’t sanctioned by IT. And if IT doesn’t know about something, they can’t regulate it or defend against it. Clearly, this creates a massive security risk and headaches for both IT and security.
Now, with generative AI tools flooding the workplace, that headache is turning into a full-blown migraine.
The rush to adopt AI productivity tools has opened a Pandora’s box of security vulnerabilities that most organizations are completely unprepared for. These tools are expanding existing visibility gaps while simultaneously creating a constant stream of new ones.
Invisibility — in plain sight
“Patchy” (pun very much intended) visibility is only part of the problem. Awareness of the threat posed by unregulated tools is widespread, but there’s a startling gap in translating that awareness into concrete readiness. According to Ivanti’s 2025 State of Cybersecurity Report, 52% of IT and security professionals view API-related and software vulnerabilities as high- or critical-level threats. So why do only 31% and 33%, respectively, consider themselves very prepared to address those risks? It’s the difference between theory and practice.
Making that shift to readiness is easier said than done, given the widespread and elusive nature of shadow IT practices. Software that employees use, including shadow IT, ranks as the number one area where IT and security leaders report insufficient data to make informed security decisions — a problem affecting 45% of organizations.
Let that sink in: nearly half of security teams are operating without visibility into the applications running within their own networks. Not good. At all.
The Gen AI multiplier effect
Generative AI has created a perfect storm for the proliferation of shadow IT. Employees eager to boost productivity are installing AI tools with little thought to security implications, while security teams struggle to keep pace.
The ubiquity and ease of access to these tools mean they can appear in your environment faster than traditional software ever could. A text summarization tool here, a code generation platform there — each creating new pathways for data leakage and potential breaches.
What makes this particularly dangerous is how these tools operate. Unlike traditional shadow IT applications, which often store data locally, generative AI solutions typically send corporate data to external cloud environments for processing. Once your sensitive information leaves your controlled environment, all bets are off.
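To make that data flow concrete, here’s a minimal sketch of what many of these tools do behind the scenes: whatever text an employee pastes in gets shipped, verbatim, to an external API for processing. The endpoint, model name and payload shape below are illustrative placeholders, not any specific vendor’s API.

```python
import json
import urllib.request

# Illustrative only: a generic, hypothetical completion endpoint.
# Real AI tools differ in detail, but the pattern is the same: your text leaves the network.
API_URL = "https://api.example-ai-vendor.com/v1/summarize"
API_KEY = "employee-pasted-personal-key"  # often a personal, unmanaged key

def summarize(document_text: str) -> str:
    """Send internal document text to an external cloud service for summarization."""
    payload = json.dumps({"model": "summarizer-large", "input": document_text}).encode()
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    # At this point the document (customer data, source code, contract terms)
    # is outside your controlled environment and subject to the vendor's retention policy.
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["summary"]
```

Nothing about this pattern is exotic, and that is exactly why it spreads so easily: it takes an employee minutes to wire up, and it leaves no trace on the endpoint for IT to find.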
The root of the shadow IT problem…
This isn’t a manifesto on regulating employee behavior. It’s genuinely understandable, at least to me, why employees seek out tools that help them do their work more efficiently. Shadow IT is rarely born of malice; more often, it signals that something is lacking in the organizational structure.
Specifically, data silos between security and IT teams create perfect conditions for shadow IT to flourish.
These divisions manifest in several ways:
- Security data and IT data are walled off from each other in 55% of organizations
- 62% report that siloed data slows security response times
- 53% claim these silos actively weaken their organization’s security posture
When IT lacks visibility into security threats and security lacks visibility into IT operations, shadow IT thrives in the gaps between.
…and how to solve it
Addressing the shadow IT challenge, particularly in this AI-centric era, requires a different approach from what IT and security teams may have tried in the past. Instead of attempting to eliminate shadow IT entirely (a likely futile effort), organizations need to build frameworks that provide visibility and control.
Breaking down those data silos that separate IT and security teams is a critical first step. This means implementing unified platforms that provide comprehensive visibility across the entire attack surface, including shadow IT and the vulnerabilities it creates.
With proper integration between security and IT data, organizations can move from reactive firefighting to proactive defense. They can identify unsanctioned AI tools as they appear, assess their risk levels and implement appropriate controls — all without hampering the productivity gains these tools offer.
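To illustrate what “identify unsanctioned AI tools as they appear” can look like in practice, here’s a rough sketch that scans an outbound proxy log for traffic to known generative AI services. The log format, column names and domain watchlist are assumptions made for the example; in a real deployment they would come from your secure web gateway, CASB or SaaS-discovery feed.

```python
import csv
from collections import Counter

# Hypothetical watchlist; in practice this would come from a maintained
# SaaS-discovery or threat-intel feed, not a hard-coded set.
GENAI_DOMAINS = {
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "api.cohere.ai",
}

def flag_genai_traffic(proxy_log_path: str) -> Counter:
    """Count requests per user to known generative AI services.

    Assumes a CSV proxy log with 'username' and 'destination_host' columns;
    adjust the field names to match your gateway's export format.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row["username"], host)] += 1
    return hits

if __name__ == "__main__":
    # Top ten user/service pairs by request volume.
    for (user, host), count in flag_genai_traffic("proxy_export.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

The output is nothing more than a report of who is talking to which AI service, which is usually enough to start a conversation with those teams rather than a crackdown.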
Of course, “dismantle the silos” is an oversimplified directive on its own. There also needs to be an ongoing culture shift so employees no longer feel the need to adopt tools covertly: listen to employees about the tech-related barriers they face, evaluate employee-preferred tools for potential inclusion, and train employees on the risks so they understand how their choices directly affect business outcomes.
Micromanagement is certainly not the solution, nor is AI itself the problem. AI is a reality of our current workplace, and a lot of good stems from many of the new AI tools. The problem comes when employers fail to dismantle silos, tackle visibility gaps, bring shadow IT into the open and proactively prepare for the attack vectors that come with these tools.
Ignoring the problem will not make it go away. As generative AI continues to gain prevalence and capability, the problem will only worsen.