Hi,
I have been using your benchmark to run different tests and comparisons between 10-series cards. Now that I have received an RTX 2080 Ti, when trying to run the benchmark I am getting this:
Running the benchmark: $ sudo python3 benchmark.py
running benchmark for frameworks ['pytorch', 'tensorflow', 'caffe2']
cuda version= None
cudnn version= 7201
/home/bizon/benchmark/deep-learning-benchmark-master/frameworks/pytorch/models.py:17: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
self.eval_input = torch.autograd.Variable(x, volatile=True).cuda() if precision == 'fp32'
Segmentation fault
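As a side note, the UserWarning above only refers to the removed volatile flag. A minimal sketch of how that line in frameworks/pytorch/models.py could be updated to use torch.no_grad() (the surrounding class and the fp16 branch are assumptions here, since the full file is not shown):

```python
import torch

class PyTorchModel:
    """Hypothetical stand-in for the class in frameworks/pytorch/models.py."""

    def set_eval_input(self, x, precision='fp32'):
        # volatile=True was removed in PyTorch 0.4+; autograd is now disabled
        # by wrapping inference-time setup in torch.no_grad() instead.
        with torch.no_grad():
            self.eval_input = x.cuda() if precision == 'fp32' else x.cuda().half()
        return self.eval_input
```

That change would only silence the deprecation warning, though; the segmentation fault itself still happens afterwards.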
The benchmark has been running fine with all the other cards, and the MNIST benchmark also runs perfectly. I would like to test all the new RTX cards to see their performance.
Your help here would be really appreciated.
Thanks in advance