AI agents are quickly becoming part of how real work gets done. They schedule meetings, talk to customers, move data between tools, and quietly sit in the background making decisions. As this shift accelerates, we are seeing a new category of security risk emerge, one that most organizations are not actively defending against.
AI agents don’t fail loudly. By the time you notice, it’s too late.
A New Kind of Attack Surface
We recently conducted a security audit of AI agent deployments and the biggest surprise was how little resistance attackers faced once they landed on a single developer or operator machine. Instead of exploiting servers directly, modern campaigns are targeting the local environments where AI agents are configured, tested, and occasionally run.
And this is where the first major threat comes in.
Threat No. 1: InfoStealer Malware Is Hunting AI Setups
Unlike traditional malware, modern InfoStealers are not generic: they know exactly what they are looking for.
We have seen malware designed to search specifically for AI agent configuration files, often sitting in predictable paths like ~/.clawdbot/config.yaml or similar directories. These files frequently contain live API keys, service credentials, and references to memory stores that hold sensitive business context.
In other cases, the malware goes after authentication tokens for tools the agent is connected to. Slack, email accounts, calendars, and internal dashboards are all valuable because they allow lateral movement once an attacker gets a foothold.
The risk: One compromised laptop can expose your entire AI stack. That includes your LLM API credits, internal conversations, customer data, and anything your agent has been taught to remember over time.
Threat No. 2: Impersonation and Supply Chain Attacks Are Getting Faster
The second category of risk we are seeing is trust abuse at scale.
Attackers are impersonating AI projects, brands, and tooling in ways that feel uncomfortably believable. We have encountered fake cryptocurrency tokens riding on the name recognition of legitimate AI agents. We have seen malware distributed as so-called security updates for popular tools. In some cases, compromised developer accounts have been used to ship backdoored packages that look completely normal on first inspection.
The recent $16M crash of a fake $CLAWD token is a good example of how quickly this kind of attack can turn trust into cash. Once an AI tool gains traction, it becomes a brand. And brands attract impostors.
Why Traditional Security Falls Short
AI agents behave differently than the applications most security teams are used to protecting.
They often rely on persistent credentials because they need uninterrupted access to APIs around the clock. Rotating those credentials sounds simple until you realize how many integrations break when it goes wrong.
They also store rich context. Agent memory is not just logs. It can include customer conversations, internal reasoning, business rules, and operational details that are extremely valuable to an attacker.
On top of that, agents tend to be deeply integrated. A single compromised agent can hop from Slack to email, from calendars to code repositories, without raising obvious alarms.
What Actually Helps in Practice
Based on our own security research and incident reviews, a few defensive moves consistently make a difference.
Immediate Actions
1. Start by looking at your API usage. Sudden spikes in OpenAI or Anthropic consumption are often among the first signs that something is off; we have seen compromised keys abused long before anyone noticed locally. A minimal spike-detection sketch follows this list.
2. Two-factor authentication should be enabled everywhere it is available, especially for services like Anthropic, Slack, Telegram, and Google Workspace. This does not stop every attack, but it closes several easy paths.
3. Be extremely cautious with updates. "Urgent security patches" distributed through unofficial channels are a common and sneaky delivery mechanism for malware. Always verify the source before installing anything related to your AI tooling; a checksum-verification sketch also follows this list.
4. Finally, isolate your agent environments. Running agents inside containers or serverless setups such as Cloudflare Moltworker significantly limits the blast radius if something goes wrong.
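On the first point, spike detection does not need to be sophisticated to be useful. The sketch below is a minimal illustration, not a production monitor: it assumes you can pull recent daily token totals from your provider's usage dashboard or billing export (the numbers shown here are made up), and it simply flags a day that runs well above the recent average.

```python
from statistics import mean

def detect_spike(daily_token_counts, today_tokens, factor=3.0):
    """Flag today's usage if it exceeds a multiple of the recent average.

    daily_token_counts: token totals for recent days, pulled from your
    provider's usage dashboard or billing export (illustrative source).
    """
    if not daily_token_counts:
        return False
    baseline = mean(daily_token_counts)
    return today_tokens > baseline * factor

# Example: last 7 days of usage vs. today's total (made-up numbers)
history = [120_000, 98_000, 110_000, 105_000, 130_000, 90_000, 115_000]
if detect_spike(history, today_tokens=540_000):
    print("ALERT: usage is far above the recent baseline - check for a leaked key.")
```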
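For the third point, one simple habit is to compare a downloaded update against the checksum published on the project's official release page before installing it. The sketch below uses Python's standard hashlib module; the file name and checksum placeholder are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a downloaded file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum published on the project's official release page.
official_checksum = "<paste the published sha256 here>"
download = Path("agent-update.tar.gz")  # hypothetical file name

if sha256_of(download) != official_checksum:
    raise SystemExit("Checksum mismatch - do not install this update.")
print("Checksum matches the published value.")
```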
Thinking Long-Term about Architecture
Over time, AI agent security benefits from a few structural decisions.
Defaulting to localhost-only access reduces unnecessary network exposure. Storing credentials in an OS-level keychain instead of plain YAML files removes an entire class of easy wins for attackers. Keeping development, staging, and production agents separate prevents experimentation from leaking into real operations.
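As a concrete illustration of the keychain point, the Python keyring library reads and writes secrets through the operating system's native credential store (macOS Keychain, Windows Credential Manager, or the Secret Service API on Linux). The sketch below is a minimal example; the service and account names are placeholders for whatever your agent actually uses.

```python
import keyring  # pip install keyring; backed by the OS credential store

SERVICE = "my-ai-agent"          # hypothetical service label
ACCOUNT = "anthropic_api_key"    # hypothetical credential name

# One-time setup: store the key in the OS keychain instead of a plain YAML file.
keyring.set_password(SERVICE, ACCOUNT, "sk-ant-...your-key-here...")

# At runtime: read it back without ever keeping it on disk in plaintext.
api_key = keyring.get_password(SERVICE, ACCOUNT)
if api_key is None:
    raise RuntimeError("API key not found in the OS keychain.")
```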
Regular security reviews matter more than most teams expect. AI infrastructure changes fast, and assumptions that were safe three months ago may no longer hold. Monthly check-ins are not excessive in this space.
The Threat Landscape Is Still Forming
As AI agents become more capable, they become more attractive targets. Anyone deploying them needs to ask questions that go beyond classic application security.
- Where does your agent’s memory actually live?
- Who is allowed to message it or trigger actions?
- If something goes wrong, can you reconstruct what the agent did and why it did it?
These are not abstract concerns.
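The third question, in particular, is hard to answer after the fact unless the agent keeps an append-only record of its own actions. Below is a minimal sketch of that idea, assuming a simple JSON-lines audit file; the tool and action names are hypothetical.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("agent_audit.jsonl")  # hypothetical append-only log file

def record_action(tool: str, action: str, arguments: dict, triggered_by: str) -> None:
    """Append one structured entry per agent action so incidents can be reconstructed."""
    entry = {
        "ts": time.time(),
        "tool": tool,                  # e.g. "slack", "calendar"
        "action": action,              # e.g. "send_message"
        "arguments": arguments,        # redact secrets before logging in a real setup
        "triggered_by": triggered_by,  # the user, webhook, or schedule that caused it
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

record_action("slack", "send_message", {"channel": "#ops", "text": "Daily report sent"}, "cron:09:00")
```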
Moving Forward
Whether you're building internal AI tools or deploying customer-facing agents, security needs to be part of the design from the beginning.
AI-powered operations are becoming the default, but that doesn’t mean they have to introduce unacceptable risk.
At 2am.tech, we help organizations deploy AI infrastructure securely, from early architecture decisions to continuous monitoring in production. Book a free consultation.
About the Author
Antonio Ramirez Cobos is the CTO at 2am.tech, a software development and cybersecurity consultancy with 25+ years of experience helping enterprises navigate digital transformation securely. If you want to talk through AI security for your organization, you can book a free 2am Tech Talk or reach out via email.
Disclaimer: This article references publicly disclosed security research. No confidential vulnerabilities are shared. Organizations using AI agents should conduct their own security assessments.