forked from pytorch/pytorch
Uvm backend #1174
Draft
dllehr-amd wants to merge 9 commits into ROCm:master from dllehr-amd:uvm_backend
+2,847 −33
Commits on Dec 21, 2022
- e5a95d5
- c19db86
- 5d2b294
- dcebdca
- d1cfce1
- 189be36
Commits on Jan 24, 2023
- 0a3a463: Add CUDAMallocManagedAllocator Backend
  With the new CUDAAllocator class, we have created a new CUDAMallocManagedAllocator, which handles allocation requests from both CPU and CUDA device types when the backend is enabled. You can enable the backend with PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocManaged and view it inside PyTorch with torch.cuda.get_allocator_backend(). This allocator is initially rudimentary, as the performance implications of a managed allocator are still being worked out; the goal, however, is to let users swap out the backend at run time with no code changes required.
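Per the commit message above, switching to the managed allocator is a configuration change only. A minimal usage sketch, assuming a PyTorch build that includes this backend:

```shell
# Select the managed-memory allocator before PyTorch initializes CUDA
# (setting named in the commit message; only takes effect if the build
# includes the CUDAMallocManagedAllocator backend).
export PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocManaged

# Confirm which allocator backend is active from inside PyTorch.
python -c "import torch; print(torch.cuda.get_allocator_backend())"
```

Because the backend is chosen via an environment variable read at CUDA initialization, the same script can run with either allocator unchanged.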
Commits on Feb 7, 2023
- ad82c20
Commits on Feb 21, 2023
- 12328b5