Using Cuda with train-a-digit-classifier #43
Comments
The problem is resolved; the answer is available at http://stackoverflow.com/q/36992803/6091401. You can close the issue. Thank you.
For reference, here is the complete code:
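The linked Stack Overflow answer holds the authoritative version; since the snippet itself was not preserved in this thread, here is a minimal sketch of the usual Torch7 CUDA conversion for a script like train-on-mnist.lua. The network layout and variable names below are placeholders, not the commenter's exact code:

```lua
require 'torch'
require 'nn'
require 'cutorch'   -- CUDA tensor backend
require 'cunn'      -- CUDA implementations of the nn modules

-- build the network on the CPU first (a small MLP as a placeholder;
-- the real demo trains on 32x32 MNIST images)
local model = nn.Sequential()
model:add(nn.Reshape(1024))
model:add(nn.Linear(1024, 128))
model:add(nn.Tanh())
model:add(nn.Linear(128, 10))
model:add(nn.LogSoftMax())

local criterion = nn.ClassNLLCriterion()

-- move both the model and the criterion to the GPU
model = model:cuda()
criterion = criterion:cuda()

-- flatten the parameters only AFTER the cuda() calls (see the comment
-- further down about getting back the initialised model)
local parameters, gradParameters = model:getParameters()

-- every input/target tensor must also be converted before use, e.g.
-- local input = sample[1]:cuda()
```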
@amiltonwong
It worked this way.
When I ran the code from @amiltonwong and @yasudak, an error occurred:
set nb of threads to 4
WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
This code doesn't work for me. I'm seeing the following output, which tells me that garbage is probably being loaded as the input:
time to learn 1 sample = 1.0377799272537ms
=================================>.] ETA: 6ms | Step: 0ms
You need to put the following lines after the model has been copied to the GPU; otherwise the model trains on the GPU but then gets back the initialised model again and again:
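The exact snippet is not preserved in this thread; the sketch below assumes it is the demo's usual parameter-flattening call, which has to come after the conversion:

```lua
-- convert the model (and the criterion) first
model = model:cuda()
criterion = criterion:cuda()

-- then fetch the flattened parameter views; if getParameters() runs
-- before cuda(), the optimizer keeps updating the old CPU storage and
-- every epoch effectively restarts from the initialised weights
parameters, gradParameters = model:getParameters()
```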
Hi,
I have used train-a-digit-classifier in CPU mode and it worked well, but now I want to test it in GPU mode. I have an NVIDIA Jetson TK1 on which I have installed CUDA 6.5 and all the other prerequisites. I have also installed Torch7 and the two packages cutorch and cunn.
In some tutorials, they say that to use the GPU mode with CUDA, there are only a few lines of code to add:

```lua
require 'cunn'   -- in order to use CUDA
model:cuda()     -- to convert the nn model to CUDA
```
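For context, a common extra step with cunn (a sketch, assuming `input` is a sample tensor from the demo's dataset loader): the tensors fed to the converted network must be CUDA tensors as well, otherwise the first forward pass fails with a type-mismatch error.

```lua
local gpuInput = input:cuda()            -- DoubleTensor -> CudaTensor
local output   = model:forward(gpuInput) -- forward pass on the GPU
```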
But when I run:
qlua train-on-mnist.lua
I get some errors. Can you help me? Regards.