I modified your code 'test-word2veckeras.py' for word2vec training on the GPU.
But GPU utilization was 0% in nvidia-smi during the whole training run.
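For what it's worth, Theano picks its device from `THEANO_FLAGS` or `~/.theanorc`; this is a sketch of the flags I would expect, as an assumption about the setup (note that utilization can still sit near 0% even when the startup log reports a GPU device, if the Python-side batch preparation dominates the run time):

```shell
# force Theano onto the GPU for one run;
# put the same keys under [global] in ~/.theanorc to make it permanent
THEANO_FLAGS=device=gpu,floatX=float32 python test-word2veckeras.py
```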
Here is my training code.
But it's still too slow compared with Google's C version, and the result also looks a little strange.
The training data has 100k sentences (each sentence has 19 words on average). I use the CBOW model, 15 iterations, a window size of 8, and a vector size of 200. My machine is an i7-2600 with a Titan X.
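To make the parameters concrete, here is a minimal pure-Python sketch (my own illustration, not the word2veckeras code) of how a CBOW window of size 8 turns a sentence into (context, target) pairs, where the model predicts the target from the averaged context:

```python
def cbow_pairs(sentence, window):
    """Yield (context_words, target_word) pairs as used by CBOW."""
    for i, target in enumerate(sentence):
        # words within `window` positions on either side of the target
        context = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]
        if context:
            yield context, target

# illustrative sentence only; with window=8 every other word in a
# short sentence like this ends up in the context
sentence = "china exports goods to the united states".split()
pairs = list(cbow_pairs(sentence, window=8))
print(pairs[0])
```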
Here is the result of my test.
Using gpu device 0: GeForce GTX TITAN X (CNMeM is disabled, CuDNN 4004)
Using Theano backend.
train_batch_cbow
train_batch_cbow
Elapsed time 1540.953537 seconds
[(u'against', 0.9976166486740112), (u'other', 0.9975610375404358), (u'another', 0.9974160194396973), (u'most', 0.9969826936721802)]
The last line shows the most similar words to 'China'.
Google's C version took only 60 seconds, and its most similar words were 'United States', 'export market', and 'Chinese'.
Do you think something is wrong with my test? Or do you have any measured benchmarks for reference?
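One thing I notice: all four similarity scores above are ≈0.997, which usually means the vectors are nearly collinear, i.e. barely trained. For reference, this is the cosine similarity that `most_similar` ranks by, sketched in pure Python with hypothetical toy vectors (not my trained model):

```python
import math

def cosine(u, v):
    # cosine similarity: dot product divided by the product of norms
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# hypothetical 3-d vectors for illustration only
vecs = {
    "china":   [0.9, 0.1, 0.3],
    "chinese": [0.8, 0.2, 0.3],
    "against": [0.1, 0.9, 0.2],
}
query = vecs["china"]
ranked = sorted(
    ((w, cosine(query, v)) for w, v in vecs.items() if w != "china"),
    key=lambda x: -x[1],
)
print(ranked[0][0])  # a semantically close vector should rank first
```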