
Can you share the training files? #2

Open
PerdonLiu opened this issue Jul 4, 2019 · 4 comments

Comments

@PerdonLiu

First, I really appreciate your work; it is very helpful for understanding the essence of adversarial examples.

I ran into some problems while trying to reproduce the standard test accuracy of the model trained on the non-robust CIFAR-10 dataset. I trained on it as described in Appendix C.2, but I only reached about 64% test accuracy instead of the reported 88%.

Could you please share your training code? Thanks in advance; I can't help saying that this paper is extremely useful!
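For context, here is roughly how I set up training. This is only a sketch: the dataset file names, hyper-parameters, and the CIFAR-style adaptation of torchvision's resnet50 (standing in for the bearpaw model) are my own assumptions, not anything taken from the paper.

```python
# Rough training sketch (NOT the authors' code). File paths, hyper-parameters,
# and the CIFAR-style adaptation of torchvision's resnet50 are my assumptions.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader
from torchvision.models import resnet50

# The non-robust CIFAR-10 set is assumed to be saved as image/label tensors
# (placeholder file names), images as floats in [0, 1] with shape (N, 3, 32, 32).
ims = torch.load("d_non_robust_CIFAR/CIFAR_ims.pt")
labs = torch.load("d_non_robust_CIFAR/CIFAR_lab.pt").long()
train_loader = DataLoader(TensorDataset(ims, labs), batch_size=128,
                          shuffle=True, num_workers=4)

# CIFAR normalization statistics, applied manually to the batched tensors.
mean = torch.tensor([0.4914, 0.4822, 0.4465]).view(1, 3, 1, 1).cuda()
std = torch.tensor([0.2023, 0.1994, 0.2010]).view(1, 3, 1, 1).cuda()

# One common CIFAR adaptation of resnet50: 3x3 stem conv, no initial max-pool.
model = resnet50(num_classes=10)
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
model.maxpool = nn.Identity()
model = model.cuda()

opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[50, 75], gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    model.train()
    for x, y in train_loader:
        x, y = x.cuda(), y.cuda()
        loss = criterion(model((x - mean) / std), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    sched.step()
```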

@stefan-matcovici

When testing the classifier on the original test set, are you using normalization?

@PerdonLiu
Author

PerdonLiu commented Jul 29, 2019

Yes, data normalization (mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010]) is used, and the batch-norm layers use their running statistics since the model is put into model.eval() mode. Even so, the test accuracy is only 81.5%. The ResNet-50 I use is the CIFAR-adapted version from https://github.com/bearpaw/pytorch-classification/blob/master/models/cifar/resnet.py.

I think there must be some mistake somewhere. Could you please point it out? Thanks a lot!
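For completeness, this is roughly how I run the standard evaluation on the original CIFAR-10 test set; only the normalization statistics come from the numbers above, everything else (batch size, data root, etc.) is my own choice:

```python
# Evaluation sketch on the original CIFAR-10 test set (my own setup, not the
# authors'); `model` is the trained network from the training sketch above.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

test_tf = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
                         std=[0.2023, 0.1994, 0.2010]),
])
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=test_tf)
test_loader = DataLoader(test_set, batch_size=256, shuffle=False, num_workers=4)

model.eval()  # batch-norm layers now use their running statistics
correct = 0
with torch.no_grad():
    for x, y in test_loader:
        x, y = x.cuda(), y.cuda()
        correct += (model(x).argmax(dim=1) == y).sum().item()
print(f"standard test accuracy: {correct / len(test_set):.4f}")
```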

@yh960520

> I trained on the non-robust CIFAR-10 dataset as described in Appendix C.2, but I only reached about 64% test accuracy instead of the reported 88%. Could you please share your training code?

How did you get the robust accuracy? Is it the same as in the paper?

@PerdonLiu
Author

> How did you get the robust accuracy? Is it the same as in the paper?

I only tried to reproduce the standard accuracy, and what I got was not as good as reported.
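For reference, robust accuracy in this setting would be measured by evaluating the same model under an adversarial attack such as PGD. Below is a generic L-infinity PGD sketch; the epsilon, step size, and number of steps are placeholders rather than the paper's exact attack parameters:

```python
# Generic PGD robust-accuracy sketch (placeholder attack parameters, not the
# paper's exact setup); `model` is the trained network from the sketches above.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

mean = torch.tensor([0.4914, 0.4822, 0.4465]).view(1, 3, 1, 1).cuda()
std = torch.tensor([0.2023, 0.1994, 0.2010]).view(1, 3, 1, 1).cuda()

# Attack in pixel space, so load the test set WITHOUT normalization here.
raw_test = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())
raw_loader = DataLoader(raw_test, batch_size=256, shuffle=False, num_workers=4)

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20):
    """L-infinity PGD on images in [0, 1]; normalization happens inside the loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta - mean) / std), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep x + delta a valid image
        delta.grad.zero_()
    return (x + delta).detach()

# Robust accuracy = standard accuracy measured on the attacked inputs.
model.eval()
correct = 0
for x, y in raw_loader:
    x, y = x.cuda(), y.cuda()
    x_adv = pgd_linf(model, x, y)
    with torch.no_grad():
        correct += (model((x_adv - mean) / std).argmax(dim=1) == y).sum().item()
print(f"robust (PGD) accuracy: {correct / len(raw_test):.4f}")
```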
