
any plans to port the code? #5

Open
linminhtoo opened this issue Sep 15, 2022 · 7 comments

Comments

linminhtoo commented Sep 15, 2022

hello authors,

I came to know of this work through your team's study on using ESP for drug discovery projects: https://pubs.acs.org/doi/10.1021/acs.jmedchem.2c00164?fig=tgr1&ref=pdf

However, it appears that this repo was built several years ago and is written in Python 2 with TensorFlow 1, which is rather outdated and will not work on newer GPUs. I was wondering if the team has any plans to upgrade the codebase, or to include a PyTorch version as well.

Thanks,
Min Htoo

fredludlow (Member) commented

Good question... We're not actively working on the codebase, but I'd like to keep on top of bit-rot where possible - what specific GPUs are causing problems?


linminhtoo commented Sep 16, 2022

Hmm, yeah. I think even if I force myself to use TensorFlow 1, it won't be compatible with newer GPUs (like the RTX 3080 I'm using), since the CUDA versions supported by TensorFlow 1 only go up to about CUDA 10, and we're already at 11.x.

I tried to use TensorFlow 2, but there's quite a lot of code in the repo to change for compatibility; I spent some time trying to migrate the TensorFlow code but didn't succeed in running model inference.

fredludlow (Member) commented

Gotcha.

There's a Python 3.7 port that I know of (still TensorFlow 1, but Python 3 probably makes updating other parts easier); let me find out if it's publicly available or can be made so.

If you just want inference working: our internal "ESP server" runs more or less the code in this repo on a fairly old Xeon, does the inference on the CPU, and is still more than fast enough for interactive use in a webapp. Obviously, if you want to retrain or further develop the model, that's not very helpful.
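For anyone wanting to try the CPU-only route described above, the usual trick is to hide the CUDA devices before TensorFlow is first imported. This is a generic TensorFlow sketch, not code from this repo's scripts:

```python
import os

# Hiding all CUDA devices forces TensorFlow (1.x or 2.x) to fall back to the
# CPU. This must be set BEFORE tensorflow is imported for the first time,
# since GPU discovery happens at import/session-creation time.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# Any subsequent `import tensorflow` in the inference scripts will then see
# no GPUs and run entirely on CPU.
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → -1
```

The same effect can be had from the shell with `CUDA_VISIBLE_DEVICES=-1 python infer.py` (script name hypothetical), which avoids touching the code at all.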


linminhtoo commented Sep 19, 2022

Thanks for this, @fredludlow. Yes, at least the Python 3 version would certainly be helpful, as some of the errors I ran into seemed to be due to the Python 2 vs 3 mismatch. Looking forward to your response!

Interesting that CPU is fast enough. If all else fails, I could try running the inference on CPU and using it that way, although, as you mentioned, I won't be able to re-train the model unless I rewrite the code.
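For context, the Python 2 vs 3 mismatches mentioned above are usually mechanical. A few illustrative examples of idioms that break under Python 3, with their fixes (generic examples, not lines from this repo):

```python
# Common Python 2 idioms that fail under Python 3, shown with their fixes.

# 1. print was a statement in Python 2; in Python 3 it is a function.
print("hello")             # Python 2's `print "hello"` is a SyntaxError in 3

# 2. `/` between ints was floor division in Python 2; use `//` to keep that.
assert 7 // 2 == 3         # in Python 3, `7 / 2` gives 3.5 instead

# 3. dict.iteritems()/.iterkeys() were removed; use .items()/.keys().
d = {"a": 1, "b": 2}
pairs = sorted(d.items())  # Python 2 code often calls d.iteritems()
assert pairs == [("a", 1), ("b", 2)]

# 4. xrange() was removed; range() is already lazy in Python 3.
assert sum(range(5)) == 10
```

The stdlib `2to3` tool automates most of these rewrites, though TensorFlow 1 API changes still have to be handled by hand.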

linminhtoo (Author) commented
Hello @fredludlow, it's been a while, but I'm still interested in this work and was wondering if any progress has been made on this front. Cheers.

fredludlow (Member) commented

There's a PR (#6) which I still haven't had a chance to look at properly, but it promises to run correctly on Python 3.x.

Apologies for the very long delay reviewing this (if you find it works for you, that's a very good +1 for me to just merge it :) )

linminhtoo (Author) commented

> There's a PR (#6) which I still haven't had a chance to look at properly, but it promises to run correctly on Python 3.x.
>
> Apologies for the very long delay reviewing this (if you find it works for you, that's a very good +1 for me to just merge it :) )

oh thanks, I'll check it out!
