OWASP Top 10 for AI Agents - Candidate Framework

Overview

This project documents the top 10 security risks specifically related to AI Agents, representing a comprehensive analysis of vulnerabilities unique to autonomous AI systems. The document provides detailed descriptions, examples, and mitigation strategies for each risk, helping organizations secure their AI agent deployments effectively.

Purpose

As AI agents powered by GenAI models become increasingly prevalent, understanding and mitigating their security risks becomes crucial. This guide aims to:

  • Identify and explain the most critical security risks in AI agent systems
  • Provide practical mitigation strategies for each identified risk
  • Help organizations implement secure AI agent architectures
  • Promote best practices in AI agent security

Project Structure

The documentation is organized around the top ten security risks, each covering a specific risk category:

  1. Agent Authorization and Control Hijacking
  2. Agent Critical Systems Interaction
  3. Agent Goal and Instruction Manipulation
  4. Agent Hallucination Exploitation
  5. Agent Impact Chain and Blast Radius
  6. Agent Memory and Context Manipulation
  7. Agent Orchestration and Multi-Agent Exploitation
  8. Agent Resource and Service Exhaustion
  9. Agent Supply Chain and Dependency Attacks
  10. Agent Knowledge Base Poisoning

Contributors

Editors

  • Vishwas Manral: Initial document framework and early contributions
  • Ken Huang, CISSP: Overall editing and conversion of initial document to OWASP format
  • Akram Sheriff: Orchestration Loop, Planner Agentic security, Multi-modal agentic security
  • Aruneesh Salhotra: Technical review and content organization

Authors

  • Anton Chuvakin: DoS and Agentic Overfitting sections
  • Aradhna Chetal: Agent Supply Chain
  • Raj B.: Agentic Overfitting, Model extraction
  • Govindaraj Palanisamy: Alignment of sections to OWASP TOP 10 Mapping, Threat Mapping
  • Mateo Rojas-Carulla: Data poisoning at scale from untrusted sources, Overreliance and lack of oversight
  • Matthias Kraft: Data poisoning at scale from untrusted sources, Overreliance and lack of oversight
  • Royce Lu: Stealth Propagation Agent Threats, Agent Memory Exploitation

Contributors

  • Sunil Arora: Agent Hallucination Exploitation
  • Alex Shulman-Peleg, Ph.D.: Security analysis
  • Anatoly Chikanov: Technical contributions
  • Alok Tongaonkar: Technical contributions
  • Sid Dadana: EDOS AI and Downstream

Content Reviewers

  • Sahana S
  • John Sotiropoulos
  • Sriram Gopalan
  • Parthasarathi Chakraborty
  • Ron F. Del Rosario
  • Vladislav Shapiro
  • Vivek S. Menon
  • Shobhit Mehta
  • Jon Frampton
  • Moushmi Banerjee
  • Michael Machado
  • S M Zia Ur Rashid

This project has been made possible through the support and contributions of professionals from leading organizations including:

  • Jacobs
  • Cisco Systems
  • GSK
  • Palo Alto Networks
  • Precize
  • Lakera
  • EY
  • Google
  • Distributedappps.ai
  • Humana
  • GlobalPayments
  • TIAA

License

This project is part of OWASP and follows OWASP's licensing terms.

How to Contribute

We welcome contributions from the security community. Please see our contribution guidelines for more information on how to participate in this project.

Contact

For questions, suggestions, or concerns, please open an issue in this repository or contact the project maintainers.

Acknowledgments

Special thanks to all contributors who have dedicated their time and expertise to make this project possible, and to the organizations that have supported their participation in this important security initiative.

This document will be maintained by the OWASP community and represents a collaborative effort to improve security in AI agent systems.