Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

License

Notifications You must be signed in to change notification settings

HSTEHSTEHSTE/adversarial-robustness-toolbox

This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.

Folders and files

NameName
Last commit message
Last commit date

Latest commit

5630f91 · Dec 15, 2023
Dec 15, 2023
Dec 8, 2023
Aug 10, 2021
Sep 22, 2023
Sep 4, 2023
Dec 8, 2023
Dec 12, 2023
Sep 12, 2023
Aug 4, 2021
Aug 19, 2020
Nov 4, 2019
Jun 11, 2021
Apr 10, 2021
Aug 24, 2022
Nov 28, 2020
Sep 7, 2021
Apr 13, 2023
May 5, 2020
Nov 4, 2019
May 20, 2019
Nov 29, 2020
Jun 28, 2023
Sep 22, 2023
Sep 22, 2023
Sep 7, 2021
Oct 3, 2021
Aug 27, 2023
Jun 12, 2020
Sep 12, 2023
Sep 14, 2023
Sep 4, 2023
Aug 4, 2021
Sep 15, 2023

Repository files navigation

Adversarial Robustness Toolbox (ART) v1.16


CodeQL Documentation Status PyPI codecov Code style: black License: MIT PyPI - Python Version slack-img Downloads Downloads CII Best Practices

For the README in Chinese, please click here.


Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART is hosted by the Linux Foundation AI & Data Foundation (LF AI & Data). ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, speech recognition, generation, certification, etc.).

Adversarial Threats


ART for Red and Blue Teams (selection)


Learn more

Get Started
- Installation
- Examples
- Notebooks

Documentation
- Attacks
- Defences
- Estimators
- Metrics
- Technical Documentation

Contributing
- Slack, Invitation
- Contributing
- Roadmap
- Citing

The library is under continuous development. Feedback, bug reports and contributions are very welcome!

Acknowledgment

This material is partially based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0013. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).
