- Nearly 40% of IT workers admit to secretly using unauthorized generative AI tools
- Shadow AI is growing as training gaps and fear of layoffs fuel covert use
- AI tools used without oversight can leak sensitive data and bypass existing security protocols
As artificial intelligence becomes increasingly embedded in the workplace, organizations are struggling to manage its adoption responsibly, new research suggests.
A report by Ivanti claims the growing use of unauthorized AI tools in the workplace is raising concerns about deepening skill gaps and mounting security risks.
Among IT workers, over a third (38%) admit to using unauthorized generative AI tools, while nearly half of office workers (46%) say some or all of the AI tools they rely on were not provided by their employers.
Some companies allow the use of AI
Interestingly, 44% of companies have integrated AI across departments, yet a large portion of employees are secretly using unauthorized tools due to insufficient training.
One in three workers say they conceal their AI usage from management, often citing the “secret advantage” it provides.
Some employees avoid disclosing their use of AI because they don’t want to be perceived as incompetent.
With 27% reporting AI-fueled impostor syndrome and 30% worried their roles may be replaced, the disconnect is also contributing to anxiety and burnout.
These behaviors point to a lack of trust and transparency, emphasizing the need for organizations to establish clear and inclusive AI usage policies.
“Organizations should consider building a sustainable AI governance model, prioritizing transparency and tackling the complex challenge of AI-fueled imposter syndrome through reinvention,” said Ivanti’s Chief Legal Counsel, Brooke Johnson.
The covert use of AI also poses a serious risk. Without proper oversight, unauthorized tools can leak data, bypass security protocols, and expose systems to attack, especially when used by administrators with elevated access.
Organizations must respond not by cracking down but by modernizing. This includes establishing inclusive AI policies and deploying secure infrastructure, starting with strong endpoint protection to detect rogue applications and zero trust network access (ZTNA) solutions to enforce strict access controls in distributed environments.
Ivanti notes AI isn’t the problem; the real issues are unclear policies, weak security, and a lack of trust. If left unchecked, shadow AI could widen the skills gap, strain mental health, and compromise critical systems.