# CAML-AdversarialAttack

This repository is part of a seminar project on automatic differentiation and its importance for machine learning. We implement the FGSM and i-FGSM adversarial attack methods using PyTorch Autograd.

The code was adapted from this PyTorch tutorial. Please download the trained model via this link and place it in your data folder.
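The core idea of both attacks can be sketched as follows. This is a minimal, self-contained illustration, not the repository's actual code: the model, data, and hyperparameters (`eps`, `alpha`, `steps`) here are placeholders. FGSM takes one step of size `eps` in the direction of the sign of the input gradient; i-FGSM repeats smaller steps of size `alpha` and projects the result back into an L-infinity ball of radius `eps` around the original input.

```python
# Sketch of FGSM and i-FGSM with PyTorch autograd.
# The model and inputs below are dummies; the repo uses a trained MNIST model.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm(model, x, y, eps):
    """Single-step FGSM: x_adv = clip(x + eps * sign(grad_x loss))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()


def i_fgsm(model, x, y, eps, alpha, steps):
    """Iterative FGSM: repeat small FGSM steps, staying within eps of x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        # Project back into the L_inf ball of radius eps around x.
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv


# Dummy model and input, just to show the calling convention.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([4])
x_adv = i_fgsm(model, x, y, eps=0.1, alpha=0.02, steps=10)
```

The projection step is what distinguishes i-FGSM from simply running FGSM several times: it guarantees the final perturbation stays within the chosen `eps` budget.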

Here is an image of a successful i-FGSM attack with 10 iterations. A 4 becomes an 8. Do you see the difference?

*(Image: original digit 4 and the adversarial example misclassified as an 8.)*