
How to run ESMFold Inference on Multiple GPUs #639

Closed · Answered by maxall41
maxall41 asked this question in Q&A

For anyone looking back on this: what I ended up doing was splitting the FASTA file I wanted to run into 4 chunks, modifying the fold.py script to add a new --device parameter, and updating the usages of model.cuda() to model.to(args.device). I then created 4 terminal sessions and ran each chunk on a different GPU.
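A minimal sketch of that pattern, assuming fold.py parses its arguments with argparse and loads the model the way the public ESMFold examples do (esm.pretrained.esmfold_v1() plus infer_pdb). The flag names, the FASTA parser, and the output layout below are illustrative, not the actual fold.py code; the only real change over the usual model.cuda() pattern is model.to(args.device):

```python
# Sketch: fold one FASTA chunk on a chosen GPU (illustrative, not the actual fold.py diff).
import argparse
import os

import torch
import esm


def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file (minimal parser)."""
    header, seq = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            elif line:
                seq.append(line)
    if header is not None:
        yield header, "".join(seq)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", "--fasta", required=True, help="input FASTA chunk")
    parser.add_argument("-o", "--outdir", required=True, help="directory for output PDB files")
    # New flag: which GPU to run on, e.g. cuda:0 ... cuda:3 (one per terminal session).
    parser.add_argument("--device", default="cuda:0")
    args = parser.parse_args()

    model = esm.pretrained.esmfold_v1().eval()
    # Previously: model = model.cuda()
    model = model.to(args.device)

    os.makedirs(args.outdir, exist_ok=True)
    for name, sequence in read_fasta(args.fasta):
        with torch.no_grad():
            pdb_string = model.infer_pdb(sequence)
        # Use the first token of the header as the output file name.
        with open(os.path.join(args.outdir, f"{name.split()[0]}.pdb"), "w") as out:
            out.write(pdb_string)


if __name__ == "__main__":
    main()
```

Each of the four terminal sessions then runs its own chunk on its own GPU, e.g. `python fold.py -i chunk_1.fasta -o out_1 --device cuda:0` through `--device cuda:3`.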
