Description of feature
GPU execution of MSA construction using MMseqs2 gives roughly a 170x speedup over AlphaFold2's default JackHMMER search:
https://developer.nvidia.com/blog/boost-alphafold2-protein-structure-prediction-with-gpu-accelerated-mmseqs2/
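A minimal sketch of what the GPU-accelerated search looks like, assuming an MMseqs2 build with GPU support and placeholder DB paths (the real Katana paths and parameters would differ):

```shell
# One-off: convert the target DB to the padded layout the GPU prefilter expects
mmseqs makepaddedseqdb uniref30_db uniref30_db_pad

# Run the search with the GPU-accelerated prefilter enabled via --gpu 1
mmseqs search query_db uniref30_db_pad result tmp --gpu 1
```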
I've put GPU-formatted MMseqs2 DBs on Katana and this bears out on our system. The `gpu-server` mode takes up about 50 GB of VRAM, so it is suitable for H200 use but not A100. Nevertheless, the MMseqs2-GPU speedup is still dramatic even without a GPU index server, provided there's fast local storage.

The MMseqs2 team are releasing CUDA 12-compatible Docker images for T4 -> HX00 GPUs. Try to `apptainer pull` from those rather than roll a separate .sif for proteinfold, though some setups may need compilation with CUDA 11 compatibility, particular CPU vector extensions, etc.:
https://github.com/soedinglab/MMseqs2/pkgs/container/mmseqs2
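A sketch of pulling the upstream image and running the GPU server, assuming the `latest` tag on that registry and an `--nv`-enabled Apptainer install; the exact tag, DB paths, and server flags should be checked against the package page and MMseqs2 docs:

```shell
# Pull the upstream CUDA 12 image rather than building a custom .sif
apptainer pull mmseqs2-gpu.sif docker://ghcr.io/soedinglab/mmseqs2:latest

# Optional: keep the padded DB resident in VRAM between searches
# (this is the ~50 GB gpu-server mode mentioned above; H200-sized)
apptainer exec --nv mmseqs2-gpu.sif mmseqs gpuserver uniref30_db_pad &

# Point searches at the running server with --gpu-server 1
apptainer exec --nv mmseqs2-gpu.sif mmseqs search query_db uniref30_db_pad \
    result tmp --gpu 1 --gpu-server 1
```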