
Organizations in every industry have been investing in agentic AI to unlock its productivity and efficiency gains. In fact, a recent PwC study found that 79% of businesses are using AI agents in at least one business function.
Cracks in the foundation
Agentic AI operates by creating agents that gather information about their environment. These agents rely on Application Programming Interfaces (APIs) to access data and make decisions.
The issue is that, as organizations increasingly accelerate their AI ambitions, the number of connections is skyrocketing. If not managed carefully, every new connection creates potential security blind spots.
This brings significant risk, as APIs have become the primary entry point for cyberattacks, leading to more than 40,000 security incidents in just six months last year.
When API security controls fail, the consequences can be immediate and widespread. Attackers can exploit exposed API endpoints to access millions of user records, which are then traded and reused across the cybercriminal ecosystem, highlighting the fallout when robust API security measures aren’t in place.
You can’t secure what you can’t see
While agentic AI offers significant productivity gains, it also amplifies the existing security risks of APIs. Deploying agentic AI without governance and central oversight leads to ‘Shadow AI’, creating additional blind spots.
Lack of visibility and control quickly translates into security incidents, as autonomous agents can use APIs to access potentially sensitive data and execute workflows without human oversight. For example, an AI agent handling customer requests could inadvertently share HR or financial data externally if its access isn’t properly restricted.
Any unintended consequence can spiral out of control much faster than humans can respond, leaving security teams scrambling to pick up the pieces.
Unfortunately, shadow AI incidents are becoming increasingly common, as a recent report revealed 71% of UK employees have used unapproved consumer AI tools, such as chatbot assistants, at work.
Beyond shadow AI, the infrastructure that agents rely on can also create security gaps. 'Zombie APIs' (connections that should have been decommissioned but remain available and unmaintained) can be exploited by cybercriminals through AI agents.
Attackers can embed malicious instructions in the prompts agents process, a technique known as prompt injection, to compromise downstream systems. As businesses evolve and datasets change, undocumented agents can also expose sensitive information and expand the attack surface without anyone noticing, making them a prime target.
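One practical way to surface zombie and undocumented APIs is to compare the endpoints an organization has documented against the endpoints actually receiving traffic. The sketch below illustrates the idea with hardcoded sets; in a real deployment the inputs would come from sources such as an OpenAPI spec and API-gateway access logs, and the endpoint paths shown are purely hypothetical.

```python
# Minimal sketch: flagging undocumented and potentially "zombie" API endpoints
# by comparing a documented inventory against observed gateway traffic.
# All endpoint paths are illustrative, not from any real system.

documented = {"/v2/orders", "/v2/customers", "/v2/invoices"}
observed = {"/v2/orders", "/v2/customers", "/v1/orders", "/internal/debug"}

# Endpoints receiving traffic but absent from documentation: possible shadow APIs.
undocumented = observed - documented

# Documented endpoints with no observed traffic: candidates for decommissioning
# before they become unmaintained "zombie" APIs.
unused = documented - observed

print(sorted(undocumented))  # ['/internal/debug', '/v1/orders']
print(sorted(unused))        # ['/v2/invoices']
```

Even this simple set comparison makes the governance point concrete: an endpoint that no one documents, or no one uses, is an endpoint no one is defending.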
Closing the back door
To ensure agentic AI deployments don’t exacerbate security risks, enterprises need both strong policies and solid technology foundations. At the heart of this is a centralized data hub that acts as a single source of truth.
This gives AI agents easy access to approved, frequently used data, while reducing the risk of them connecting to unvetted or sensitive information.
AI agents themselves should be managed much like human employees: granted access only to the information they need to perform their role, properly managed, and taught business policies. They also need regular reviews and check-ups, almost like an HR function, to ensure they continue to operate safely and ethically.
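The "treat agents like employees" principle above can be expressed as deny-by-default, role-scoped access control. The sketch below is a hypothetical illustration (the role names and scopes are invented, not drawn from any specific product): an agent may call an API only if its role explicitly grants the scope that endpoint requires.

```python
# Illustrative sketch of least-privilege access control for AI agents.
# Role names and scopes are hypothetical examples.

AGENT_ROLES = {
    "support-agent": {"crm:read", "tickets:write"},
    "finance-agent": {"invoices:read", "invoices:write"},
}

def is_allowed(agent_role: str, required_scope: str) -> bool:
    """Deny by default: permit a call only if the agent's role
    explicitly includes the scope the endpoint requires."""
    return required_scope in AGENT_ROLES.get(agent_role, set())

# A support agent can read CRM data...
print(is_allowed("support-agent", "crm:read"))        # True
# ...but cannot touch HR or finance data.
print(is_allowed("support-agent", "hr:read"))         # False
print(is_allowed("support-agent", "invoices:write"))  # False
```

The deny-by-default design matters: an agent handling customer requests simply has no path to HR or financial data unless someone deliberately grants it, which is exactly the kind of decision that should surface in regular access reviews.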
A centralized AI management platform gives organizations visibility and control over all agents throughout their lifecycle, reducing the risk of ‘zombie’ APIs and maintaining full auditability.
This is especially important as regulation tightens, requiring businesses to demonstrate a complete trail of agentic AI deployment, down to every dataset that an agent accessed to complete a task.
With robust governance and clear oversight into how agentic AI is being deployed, businesses can scale their ambitions safely and capitalize on productivity gains.
Preparing for the agentic future
As agents are deployed more widely across businesses, API security and AI management have never been more critical. Without strong governance and oversight, autonomous systems can introduce security risks at a speed and scale that organizations cannot match.
However, with robust security practices in place, organizations can reduce these risks and unlock AI’s promise of hyper-productivity. Those that do so will be best positioned to capitalize on the transformative potential of this fast-evolving technology, and move away from simple use cases into something revolutionary.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro