Currently we need the snippet below to make PyTorch use the RMM memory pool on a dask-cuda cluster. We should be able to do this via a CLI option.
```python
# Making PyTorch use the same memory pool as RAPIDS.
def _set_torch_to_use_rmm():
    """
    This function sets up the PyTorch memory pool to be the same as the RAPIDS memory pool.
    This helps avoid OOM errors when using both PyTorch and RAPIDS on the same GPU.

    See article: https://medium.com/rapids-ai/pytorch-rapids-rmm-maximize-the-memory-efficiency-of-your-workflows-f475107ba4d4
    """
    import torch
    from rmm.allocators.torch import rmm_torch_allocator

    torch.cuda.memory.change_current_allocator(rmm_torch_allocator)


_set_torch_to_use_rmm()
client.run(_set_torch_to_use_rmm)
```
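For reference, a minimal sketch of how this currently has to be wired up by hand on a `LocalCUDACluster` (the `rmm_pool_size` value is only an example):

```python
from dask.distributed import Client
from dask_cuda import LocalCUDACluster


def _set_torch_to_use_rmm():
    # Route PyTorch's CUDA allocations through the RMM pool.
    import torch
    from rmm.allocators.torch import rmm_torch_allocator

    torch.cuda.memory.change_current_allocator(rmm_torch_allocator)


if __name__ == "__main__":
    # Example pool size; pick a value appropriate for your GPUs.
    cluster = LocalCUDACluster(rmm_pool_size="24GB")
    client = Client(cluster)

    # The allocator has to be swapped on the client process *and* on
    # every worker, hence the explicit client.run call.
    _set_torch_to_use_rmm()
    client.run(_set_torch_to_use_rmm)
```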
I have no objections to this; my only suggestion would be to make this a generic, extensible option where we can specify which libraries to set RMM as the memory manager for, something like this:
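(The inline example was not preserved here. Purely as an illustration, a generic worker-side hook driven by such an option could look like the sketch below; the `set_rmm_allocator_for_libs` helper and any CLI flag name are hypothetical, not an existing dask-cuda API:)

```python
# Hypothetical sketch: a generic hook that dask-cuda could invoke based on a
# CLI option listing which libraries should use RMM as their allocator,
# e.g. "torch,cupy". Names here are illustrative only.
def set_rmm_allocator_for_libs(libs):
    if "torch" in libs:
        import torch
        from rmm.allocators.torch import rmm_torch_allocator

        torch.cuda.memory.change_current_allocator(rmm_torch_allocator)
    if "cupy" in libs:
        import cupy
        from rmm.allocators.cupy import rmm_cupy_allocator

        cupy.cuda.set_allocator(rmm_cupy_allocator)
```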
Do you think that makes sense? @VibhuJawa, if you want to get started on a PR for this, I'm happy to help address any issues you may find along the way.