
OWASP Large Language Model Security Verification Standard


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


Introduction

The primary aim of the OWASP Large Language Model Security Verification Standard (LLMSVS) Project is to provide an open security standard for systems that leverage artificial intelligence and Large Language Models.

The standard provides a basis for designing, building, and testing robust LLM-backed applications, covering architecture, model lifecycle, model training, model operation and integration, and model storage and monitoring.

We gratefully recognize the organizations that have supported the project, whether through significant contributions of time or financially, on our "Supporters" page!

Please log issues if you find any bugs or if you have ideas. We may subsequently ask you to open a pull request based on the discussion in the issue.

Project Leaders and Working Group

The project is led by Vandana Verma Sehgal and Elliot Ward.

Initial Draft Version - 0.1

The latest stable version is version 0.1, dated February 2024.

The master branch of this repository will always be the "bleeding edge" version, which may contain in-progress changes or other open edits.

Standard Objectives

The requirements were developed with the following objectives in mind:

  1. Develop and Refine Security Guidelines: Consolidate general objectives, including community involvement and standard evolution, into a comprehensive set of security guidelines for AI and LLM-based systems.
  2. Address Unique Security Challenges of LLMs: Focus specifically on the unique functional and non-functional security challenges presented by Large Language Models.
  3. Guide Development Teams in Secure Practices: Provide detailed guidance to development teams for implementing robust security measures in LLM-based applications.
  4. Assist Security Teams in Audits and Penetration Testing: Offer methodologies and standards for security teams to conduct effective security audits and penetration tests on LLM-backed systems.
  5. Establish and Update Security Benchmarks: Create and regularly update security benchmarks to align with the latest advancements in AI and cybersecurity.
  6. Promote Best Practices in LLM Security: Encourage the adoption of industry best practices in securing LLM-based systems.
  7. Align Security Expectations Among Stakeholders: Establish a common understanding of security expectations among developers, security professionals, vendors, and clients.

License

The entire project content is licensed under the Creative Commons Attribution-ShareAlike 4.0 International license.
