This project documents the top 10 security risks specific to AI agents, providing a comprehensive analysis of vulnerabilities unique to autonomous AI systems. For each risk, the document offers detailed descriptions, examples, and mitigation strategies to help organizations secure their AI agent deployments.
As AI agents built on GenAI models become increasingly prevalent, understanding and mitigating their security risks is crucial. This guide aims to:
- Identify and explain the most critical security risks in AI agent systems
- Provide practical mitigation strategies for each identified risk
- Help organizations implement secure AI agent architectures
- Promote best practices in AI agent security
The documentation is organized around the ten main security risks, with a section covering each risk category:
- Agent Authorization and Control Hijacking
- Agent Untraceability
- Agent Critical Systems Interaction
- Agent Goal and Instruction Manipulation
- Agent Hallucination Exploitation
- Agent Impact Chain and Blast Radius
- Agent Memory and Context Manipulation
- Agent Orchestration and Multi-Agent Exploitation
- Agent Resource and Service Exhaustion
- Agent Supply Chain and Dependency Attacks
- Agent Knowledge Base Poisoning (planned for a future version; may merge with Agent Supply Chain and Dependency Attacks)
- Agent Checker-Out-of-the-Loop Vulnerability
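To make two of the listed risk categories concrete, here is a minimal Python sketch of the kind of mitigation the corresponding sections describe: an allowlist gate on tool invocations (Agent Authorization and Control Hijacking) combined with a per-session call budget (Agent Resource and Service Exhaustion) and an audit log (Agent Untraceability). The `ToolGate` class and its names are illustrative assumptions, not part of any OWASP specification.

```python
# Illustrative sketch only: names and structure are hypothetical,
# not prescribed by the OWASP document.

class AgentPolicyError(Exception):
    """Raised when an agent action violates the configured policy."""


class ToolGate:
    """Wraps tool calls with an allowlist, a call budget, and an audit log."""

    def __init__(self, allowed_tools, max_calls):
        self.allowed_tools = set(allowed_tools)  # explicit authorization
        self.max_calls = max_calls               # resource-exhaustion bound
        self.calls = 0
        self.audit_log = []                      # traceability record

    def invoke(self, tool_name, func, *args, **kwargs):
        # Reject tools the agent was never authorized to use.
        if tool_name not in self.allowed_tools:
            raise AgentPolicyError(f"tool {tool_name!r} not in allowlist")
        # Cap the number of tool calls per session.
        if self.calls >= self.max_calls:
            raise AgentPolicyError("per-session call budget exhausted")
        self.calls += 1
        self.audit_log.append(tool_name)
        return func(*args, **kwargs)


gate = ToolGate(allowed_tools={"search"}, max_calls=2)
print(gate.invoke("search", lambda q: f"results for {q}", "owasp"))
try:
    # Blocked: "shell" is not on the allowlist, regardless of budget.
    gate.invoke("shell", lambda cmd: None, "rm -rf /")
except AgentPolicyError as exc:
    print(exc)
```

The design choice here is defense in depth: the allowlist check runs before the budget check, so an unauthorized tool is never counted against the budget, and every permitted call leaves an audit trail for later review.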
- **Version 1.0** (January 2025)
- **Version 1.5** (May 2025)
- Vishwas Manral: Initial document, framework and early contributions
- Ken Huang, CISSP: Overall editing and conversion of initial document to OWASP format
- Akram Sheriff: Orchestration Loop, Planner Agentic security, Multi-modal agentic security
- Aruneesh Salhotra: Content organization & OWASP organization ambassador
- Anton Chuvakin: DoS and agentic overfitting sections
- Aradhna Chetal: Agent Supply Chain
- Raj B.: Agentic overfitting, model extraction
- Govindaraj Palanisamy: Alignment of sections to OWASP TOP 10 Mapping, Threat Mapping
- Mateo Rojas-Carulla: Data poisoning at scale from untrusted sources, Overreliance and lack of oversight
- Matthias Kraft: Data poisoning at scale from untrusted sources, Overreliance and lack of oversight
- Royce Lu: Stealth Propagation Agent Threats, Agent Memory Exploitation
- Sunil Arora: Agent Hallucination Exploitation
- Alex Shulman-Peleg, Ph.D.: Security analysis
- Anatoly Chikanov: Technical contributions
- Alok Tongaonkar: Technical contributions
- Sid Dadana: EDOS AI and Downstream
- Sahana S
- John Sotiropoulos
- Sriram Gopalan
- Parthasarathi Chakraborty
- Ron F. Del Rosario
- Vladislav Shapiro
- Vivek S. Menon
- Shobhit Mehta
- Jon Frampton
- Moushmi Banerjee
- Michael Machado
- S M Zia Ur Rashid
This project has been made possible through the support and contributions of professionals from leading organizations including:
- Cisco Systems
- GSK
- Palo Alto Networks
- Precize
- Lakera
- EY
- Distributedapps.ai
- Humana
- GlobalPayments
- TIAA
- Meta
- DigitalTurbine
- HealthEquity
- Jacobs
- SAP
This project is part of OWASP and follows OWASP's licensing terms.
We welcome contributions from the security community. Please see our contribution guidelines for more information on how to participate in this project.
For questions, suggestions, or concerns, please open an issue in this repository or contact the project maintainers.
Special thanks to all contributors who have dedicated their time and expertise to make this project possible, and to the organizations that have supported their participation in this important security initiative.
This document will be maintained by the OWASP community and represents a collaborative effort to improve security in AI agent systems.