Add a disable_mmap option to the from_single_file loader to improve load performance on network mounts #10305
base: main
Conversation
@DN6 I think the slow loading issue is affecting the CI quite a bit, so maybe this could be prioritized.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks @danhipke. The change looks good. But [code suggestion elided], and then pass it to the subsequent call [link elided] and here [link elided]. And a small nit: I would prefer naming the flag `disable_mmap`. cc: @yiyixuxu for awareness.
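For context, here is a minimal sketch of the pattern being suggested: accept the flag as a keyword argument and thread it down to the checkpoint-loading helper, switching between mmap-based and in-memory deserialization. The helper name and signature below are illustrative assumptions, not the exact diffusers internals.

```python
import safetensors.torch


def load_single_file_checkpoint(path: str, disable_mmap: bool = False):
    """Load a .safetensors checkpoint, optionally bypassing mmap (illustrative sketch)."""
    if disable_mmap:
        # Read the whole file into memory, then deserialize from bytes.
        # This avoids mmap's seek-heavy access pattern, which is slow on
        # network filesystems.
        with open(path, "rb") as f:
            return safetensors.torch.load(f.read())
    # Default behaviour: memory-mapped loading.
    return safetensors.torch.load_file(path)
```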
Changed the title from "Add a no_mmap option to the from_single_file loader to improve load performance on network mounts" to "Add a disable_mmap option to the from_single_file loader to improve load performance on network mounts".
Update ltx_video.md to remove generator from `from_pretrained()` (huggingface#10316)
Update pipeline_hunyuan_video.py docs: fix a mistake
[BUG FIX] [Stable Audio Pipeline] Fix TypeError in prepare_latents caused by audio_vae_length (huggingface#10306). torch.Tensor.new_zeros() takes a single argument size (int...): a list, tuple, or torch.Size of integers defining the shape of the output tensor. In prepare_latents, audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length evaluates to a float because self.transformer.config.sample_size returns a float, so audio = initial_audio_waveforms.new_zeros(audio_shape) fails with: new_zeros(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints, but got float". (A minimal illustration follows this commit list.) Co-authored-by: hlky <[email protected]>
Update overview.md
add 2K related model for Sana
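To make the shape error from the Stable Audio commit above concrete, here is a small self-contained illustration (the numeric values are made up, and the actual fix in huggingface#10306 may cast differently):

```python
import torch

# Config values as they might come back from the pipeline: sample_size is a float.
sample_size = 1024.0   # illustrative value, not the real config
hop_length = 2048
audio_channels = 2
batch_size = 1

# Multiplying a float by an int keeps the result a float...
audio_vae_length = sample_size * hop_length

# ...so this shape contains a float, and new_zeros() would raise:
#   TypeError: new_zeros(): argument 'size' failed to unpack the object at
#   pos 3 with error "type must be tuple of ints, but got float"
bad_shape = (batch_size, audio_channels, audio_vae_length)

# Casting to int restores a valid integer shape.
good_shape = (batch_size, audio_channels, int(audio_vae_length))

waveform = torch.randn(2, 16)
audio = waveform.new_zeros(good_shape)  # works: all dimensions are ints
print(audio.shape)  # torch.Size([1, 2, 2097152])
```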
@DN6 Added it to [link elided].
cc: @yiyixuxu to take a look here too. Related issues: [links elided]. Internal discussion: [link elided].
Co-authored-by: Dhruv Nair <[email protected]>
Co-authored-by: Dhruv Nair <[email protected]>
Applied suggestions.
What does this PR do?
This PR adds a disable_mmap option to the from_single_file loader that disables the mmap-based loading behavior of safetensors. This provides a huge performance benefit when loading from a file on a network mount (from 16 minutes down to under 1 minute for a 7.2 GB model), because network filesystems handle the seek-heavy access pattern of mmap-based loading poorly. Examples demonstrating this issue: [links elided]
Fixes #10280
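As a usage sketch (the checkpoint path is a placeholder and the pipeline class is just an example; the flag name follows the final title of this PR):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hypothetical checkpoint sitting on a network mount (NFS/SMB).
ckpt_path = "/mnt/models/sd_xl_base_1.0.safetensors"

# disable_mmap=True makes the loader read the file sequentially instead of
# memory-mapping it, avoiding the slow, seek-heavy access pattern over the network.
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    torch_dtype=torch.float16,
    disable_mmap=True,
)
```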
Before submitting
Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@DN6 @yiyixuxu @asomoza