
Running the benchmark with RTX 2080 Ti #12

Open
ghost opened this issue Nov 7, 2018 · 1 comment

Comments


ghost commented Nov 7, 2018

Hi,
I have been using your benchmark to run tests and comparisons between 10-series cards. Now that I have received the RTX 2080 Ti, I am getting this when trying to run the benchmark:

Running the benchmark: $ sudo python3 benchmark.py

running benchmark for frameworks ['pytorch', 'tensorflow', 'caffe2']
cuda version= None
cudnn version= 7201
/home/bizon/benchmark/deep-learning-benchmark-master/frameworks/pytorch/models.py:17: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
self.eval_input = torch.autograd.Variable(x, volatile=True).cuda() if precision == 'fp32'
Segmentation fault
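As an aside, the UserWarning in the trace comes from the deprecated `volatile=True` flag, which PyTorch 0.4+ replaced with the `torch.no_grad()` context manager. A minimal sketch of the modern equivalent, using a toy `nn.Linear` model as a stand-in for the benchmark's networks:

```python
import torch
import torch.nn as nn

# Toy model standing in for the benchmark's networks
model = nn.Linear(4, 2)
model.eval()

x = torch.randn(1, 4)

# Replaces the old `Variable(x, volatile=True)` pattern:
# gradients are not tracked inside this block.
with torch.no_grad():
    out = model(x)

# `out.requires_grad` is False, just as with the old volatile flag
```

This only silences the warning, though; the warning itself is harmless and unrelated to the segfault.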

The benchmark has been running fine with all the other cards, and the MNIST benchmark also runs perfectly. I would like to test all the new RTX cards to see their performance.

Your help here would be really appreciated.
Thanks in advance


hery commented Jan 2, 2019

Hello, have you installed CUDA 10? CUDA 9 does not support the Turing architecture.
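A quick way to check this from Python — a sketch, assuming a working PyTorch install (a `None` from `torch.version.cuda` would point at a CPU-only build, consistent with the `cuda version= None` line in the trace above):

```python
import torch

# CUDA toolkit version the PyTorch wheel was built against;
# None means a CPU-only build.
print(torch.version.cuda)

# Turing cards (RTX 20-series) have compute capability 7.5,
# which requires CUDA 10 or newer.
if torch.cuda.is_available():
    print(torch.cuda.get_device_capability(0))  # (7, 5) for an RTX 2080 Ti
```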
