Artificial intelligence is no longer just answering our questions. It is taking action. We have moved from simple chatbots to Agentic AI applications that can reason, plan, invoke external tools, and execute complex multi-step tasks autonomously.
While this autonomy is incredibly powerful, it introduces an entirely new attack surface that traditional security frameworks are simply not equipped to handle.
In the webinar above, Lares Senior Engineer Raúl Redondo breaks down the newly released OWASP Agentic AI Top 10. This framework is designed specifically for autonomous systems that maintain memory, integrate with APIs, and execute real-world actions.
If you haven't watched Raúl's full breakdown yet, hit play above. For a quick reference guide on the 10 critical risks and an immediate action plan to secure your systems, read our comprehensive breakdown below.
What is Agentic AI?
Unlike traditional Large Language Models (LLMs) or chatbots that wait for explicit user prompts, Agentic AI systems autonomously break down complex goals into subtasks. They use external tools, query databases, execute code, and adapt dynamically to complete their mission with minimal human intervention.
Why is there a separate OWASP Top 10 for Agentic AI?
Because agents take autonomous action, they require a different security model than standard web applications or static LLMs. Every tool an agent can call is a potential attack path. Every memory entry can be poisoned. The OWASP Agentic Top 10 addresses the unique vulnerabilities introduced when AI is given the power to act.
The OWASP Agentic AI Top 10 Risks
Below are the ten most critical vulnerabilities identified by OWASP, along with real-world examples of how attackers are exploiting them today.
1. Agent Goal Hijack (ASI01)
The Threat: Attackers use indirect prompt injections hidden in external content like emails or documents to take over an agent's entire mission. The agent still appears to be working normally, but its underlying goal has been completely rewritten.
In the Wild: The EchoLeak vulnerability (CVE-2025-32711) demonstrated zero-click data exfiltration. Microsoft 365 Copilot could be tricked into stealing data simply by processing a maliciously crafted email.
2. Tool Misuse (ASI02)
The Threat: Agents are granted access to approved tools like APIs or file systems. Attackers who manipulate the agent's reasoning can turn those very features into weapons for data exfiltration or workflow hijacking.
In the Wild: A compromised GitHub token allowed attackers to merge malicious code into an Amazon coding extension, enabling destructive commands that could wipe file systems.
3. Identity & Privilege Abuse (ASI03)
The Threat: Agents often inherit wide-ranging credentials and consolidate permissions from multiple human users. If compromised, the attacker gains the agent's full access scope, making it incredibly difficult to trace who actually performed an action.
4. Supply Chain Vulnerabilities (ASI04)
The Threat: Traditional supply chain attacks target static dependencies. Agentic supply chain attacks target the dynamic tools, plugins, and MCP (Model Context Protocol) servers that agents load at runtime, effectively bypassing traditional security gates.
In the Wild: A malicious
postmark-mcpnpm package impersonated a legitimate email service. Any AI agent using it unknowingly BCC'd an attacker on every single email it sent.
5. Unexpected Code Execution (ASI05)
The Threat: Many agents generate and run code on the fly to solve problems. This creates a direct path for attackers to trick the agent into executing unauthorized or destructive commands directly in your production environment.
In the Wild: Vulnerabilities discovered in popular AI IDEs allowed remote code execution via poisoned prompts and malicious configurations.
6. Memory & Context Poisoning (ASI06)
The Threat: Agents maintain memory across multiple sessions. A single successful injection can permanently poison an agent's future decision-making, creating "sleeper agents" that use compromised logic long after the initial attack.
7. Insecure Inter-Agent Communication (ASI07)
The Threat: Multi-agent systems rely on messages to coordinate. Without strict authentication and encryption, attackers can intercept, spoof, or modify these communications to manipulate downstream systems.
8. Cascading Failures (ASI08)
The Threat: When agents are deeply connected, a single compromised node can spread errors and poisoned data across an entire automated workflow. The blast radius expands rapidly before human monitors can even intervene.
9. Human-Agent Trust Exploitation (ASI09)
The Threat: Agents generate polished and authoritative-sounding explanations. Attackers exploit this inherent trust to trick human users into approving risky actions, executing malware, or authorizing fraudulent payments.
10. Rogue Agents (ASI10)
The Threat: An agent becomes misaligned or compromised and starts acting outside its intended role. Even if its individual actions seem normal, its overall behavior becomes harmful and difficult for traditional monitoring tools to catch.
Your 5-Step Agentic AI Security Action Plan
Securing these applications requires a fundamental shift in strategy. To start locking down your agentic systems this week, follow this immediate action plan:
Inventory Everything: Map every agent, every MCP server, and every non-human identity in your environment.
Threat Model: Run your systems against the OWASP Agentic Top 10. Ask your engineering team where each of these risks applies to your specific architecture.
Apply Least Agency: Ensure your agents only have the exact tools and short-lived credentials they need to complete their tasks.
Build a Kill Switch: Implement circuit breakers and test your ability to instantly revoke credentials and quarantine suspicious agents.
Monitor Semantic Intent: Use watchdog agents and semantic monitoring to understand what an agent is trying to do, not just the tools it is using.
Is Your AI Attack Surface Secure?
As AI capabilities evolve, your security practices must evolve right alongside them. You cannot rely on traditional application security testing to catch dynamic, memory-based, and autonomous agent threats.
The Lares team is actively testing agentic systems in the wild. We can help you identify these vulnerabilities before an attacker does.
Contact Lares today to schedule a threat modeling session or a targeted AI penetration test for your applications. Let our experts help you build, deploy, and scale with absolute confidence.
Empowering Organizations to Maximize Their Security Potential.
Lares is a security consulting firm that helps companies secure electronic, physical, intellectual, and financial assets through a unique blend of assessment, testing, and coaching since 2008.
16+ Years
In business
600+
Customers worldwide
4,500+
Engagements