
Always-on AI agents like OpenClaw are a promising step towards a new generation of powerful digital assistants capable of handling users’ day-to-day “life admin”.
But if you want an AI agent to book you a table in a restaurant, respond to your emails, do your shopping, or make a doctor’s appointment, you may be opening up your digital life to a frightening new level of risk.
Founder & CEO of Emergent.
OpenClaw (formerly Moltbot and Clawdbot) set the internet ablaze last month, racking up over 100,000 GitHub stars in a week (which is virtually unheard of).
Article continues below
Its overnight success (and subsequent absorption into Sam Altman’s OpenAI) speaks to the wider excitement around the potential for AI agents to usher in the next stage for AI applications.
But, as power users and SMBs rush to deploy persistent AI agents, handing them the power to browse the web, manage files, connect to inboxes and interact with other agents on their behalf, alarm bells are ringing in the cybersecurity space.
Why always-on agents create a fundamentally different risk profile to chatbots
Right now, the vast majority of AI users do so through chatbot sessions, where the user’s own systems are protected by the constrained nature of the interaction. You add your data to the model, get your answer, and close the window.
Always-on AI agents are a different matter entirely.
The selling point behind OpenClaw and other AI agents is that they can perform real-world tasks on behalf of users. Set up OpenClaw to run locally on your computer, and it’s capable of reading and writing files, executing scripts and interacting with external services, including other AI agents.
This level of integration, bringing an AI agent into the operating system layer, with what amounts to root access, is what makes AI agents like OpenClaw work. It also imperils the security of the entire system.
Small teams and individual power users are often self-hosting agents, wiring them into Gmail, Slack, AWS, GitHub and Stripe, and deploying them with minimal friction. But this “minimal friction” comes at the cost of minimal guardrails.
This isn’t a critique of any one framework, but rather a sign that the ecosystem is moving faster than its security model. An over-permissioned agent could delete or modify critical files, leak sensitive data through logs or memory, post on social media without review, or trigger costly API calls or transactions.
A single vulnerability can expose the user’s entire digital life.
Agent-to-agent ecosystems represent a new kind of attack surface, exacerbating the threat of prompt injection. According to recent research from Gartner, over 50% of successful cybersecurity attacks against AI agents in the coming year are expected to exploit access control issues.
Prompt injection, a kind of social engineering attack that targets AI specifically, involves a third party misleading the AI model by injecting malicious instructions into the conversation context.
In the same way that a phishing email tries to trick people into giving away sensitive information, “prompt injections attempt to trick AIs into doing something you did not ask for,” according to an OpenAI blog post.
This approach, combined with the inflated power of an AI agent, can have a more profound effect than making a chatbot give the wrong answer to a question.
Practical guardrails
In a sector defined by small developer teams, a DIY approach, and an emphasis on speed over safety, the people experimenting with autonomous agents today should be taking practical steps to reduce the risks associated with these new technologies.
Create dedicated accounts: Don’t give agents access to your primary inbox or root cloud credentials. Use scoped service accounts.
Segment environments: Separate experimental agents from production systems.
Rotate keys frequently: Assume credentials will leak eventually.
Red-team your own setup: Attempt prompt injection and tool misuse scenarios to see how the agent behaves.
Disable auto-execution for high-risk tools: Require confirmation for financial, administrative or destructive actions.
Audit exposed instances: Ensure your self-hosted agent isn’t reachable from the public internet without authentication.
Above all, keep an eye on your AI agents. It may feel like lessening the effectiveness of a tool, the whole point of which is ostensibly to spend less time babysitting your digital life.
But, if you wouldn’t be 100% trusting of a new human hire with access to your bank accounts and social media presence, an always-on AI agent deserves a similar level of scrutiny.
Agentic AI is on track to shape the next decade in terms of how we think about productivity. The ability to delegate complex, multi-step workflows to virtual agents will be transformative. But autonomy without the necessary guardrails is exposure, not innovation.
Because, unlike the AI solutions that came before them, AI agents aren’t just answering your questions or drafting your emails for review. They’re acting on your behalf. Right now, they’re just one vulnerability away from acting on somebody else’s.
We’ve featured the best endpoint protection software.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
https://cdn.mos.cms.futurecdn.net/jt92kXfBXVXUWwnKBmDJLn-2560-80.jpg
Source link




