
Running LSTM parser on GPU #46

Open
HGrant15 opened this issue Nov 25, 2017 · 2 comments


HGrant15 commented Nov 25, 2017

Hi,

I am trying to run your parser with the provided trained model. I would like to run it on my GPU; however, it always runs on the CPU.

Do I need to make any changes to the build.xml to compile the project with the tensorflow.jar and libtensorflow.so files?

I noticed in the codebase that GPUEvaluation.java makes use of TensorFlowInputReader, which the rest of the codebase does not use. Is this class the only part of the codebase that makes use of the GPU?

I realise this would have been more appropriate as an email question rather than an 'issue', but I cannot close this topic.

Thanks.

kentonl (Member) commented Nov 29, 2017

The GPU version can be used with two simple changes:
(1) Use this model instead.
(2) Download this binary and place it in the lib directory.

An example of a script that prepares this (among other things) can be found here.
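For concreteness, the two steps above might look like the following shell sketch. The `lib` directory, the file name `libtensorflow_jni.so`, and the GPU model folder name are assumptions for illustration, not names confirmed in this thread; substitute the actual model and binary from the links above.

```shell
#!/bin/sh
# Sketch of the GPU setup (names assumed; adjust to the actual
# model and binary linked in the comment above).
LIB_DIR="lib"
JNI_LIB="$LIB_DIR/libtensorflow_jni.so"   # step (2): GPU-enabled TensorFlow JNI binary

# Check that the GPU binary has been placed in the lib directory.
if [ -f "$JNI_LIB" ]; then
    echo "found GPU binary: $JNI_LIB"
else
    echo "missing GPU binary: $JNI_LIB"
fi

# Step (1): point --model at the GPU model folder, then run with the
# native library on java.library.path (invocation shape taken from
# the command later in this thread; folder name hypothetical):
#   java -Djava.library.path="$LIB_DIR" -jar easysrl.jar \
#       --model modelFolder_gpu --outputFormat ccgbank
```

The check is only a convenience: if the JNI binary is absent from `java.library.path`, the Java TensorFlow bindings will fail to load at runtime, which is easier to diagnose up front.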

Hope that helps!

arunikayadav42 commented:

`java -jar easysrl.jar --model modelFolder --outputFormat ccgbank` does not work
