
Discussion on Multi-GPU Training #885

Open
0809zheng opened this issue Sep 19, 2024 · 1 comment

Comments

@0809zheng

Hi,

Since the DeepVariant FAQ states that multi-GPU training is not supported ("Can model_train be run on multiple GPUs?"), I am curious about the statement "We have tested training with 1 and 2 GPUs and observed the following runtimes:" in the training case study.
Specifically, how was training with 2 GPUs tested?

Thank you!
Regards : )

@lucasbrambrink
Collaborator

lucasbrambrink commented Sep 19, 2024

Hi,

It is actually possible to train DeepVariant on multiple GPUs using tf.distribute.MirroredStrategy; see the TensorFlow documentation for MirroredStrategy.

It looks like we need to update the FAQ to reflect that; thanks for bringing it to our attention! The training case study is up to date and already applies the mirrored strategy, so feel free to continue referencing it.
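
For anyone landing here, below is a minimal sketch of how multi-GPU training with tf.distribute.MirroredStrategy typically looks in TensorFlow. The toy model and dataset are placeholders for illustration only, not DeepVariant's actual training pipeline.

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and
# aggregates gradients across replicas after each step.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Any Keras model and optimizer created inside the scope are mirrored
    # across the available GPUs. This toy classifier stands in for the
    # real model being trained.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(100,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# A toy dataset; the global batch of 64 is split evenly across replicas.
x = tf.random.normal((1024, 100))
y = tf.random.uniform((1024,), maxval=3, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(64)

model.fit(dataset, epochs=1)
```

With two GPUs, `num_replicas_in_sync` reports 2 and each replica processes half of every batch, which is how a 2-GPU run can be benchmarked against a single-GPU run.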
