When trying to run inference with my own LoRAs, as well as with the ones provided on Hugging Face (i.e. after running this command):
python MotionDirector_inference_multi.py --model /workspace/nina-home/MotionDirector/models/zeroscope_v2_576w/ --prompt "A car is running on the road." --spatial_path_folder /workspace/nina-home/MotionDirector/outputs/ready/image_animation/train_2023-12-26T14-37-16/checkpoint-300/spatial/lora/ --temporal_path_folder /workspace/nina-home/MotionDirector/outputs/ready/image_animation/train_2023-12-26T13-08-20/checkpoint-300/temporal/lora/ --noise_prior 0.5 --seed 5057764
I get this error:
[2024-03-01 16:38:56,120] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Initializing the conversion map
Traceback (most recent call last):
File "MotionDirector_inference_multi.py", line 287, in <module>
video_frames = inference(
File "/workspace/nina-home/.conda/envs/motiondirector/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "MotionDirector_inference_multi.py", line 174, in inference
pipe = initialize_pipeline(model, device, xformers, sdp, spatial_lora_path, temporal_lora_path, lora_rank,
File "MotionDirector_inference_multi.py", line 65, in initialize_pipeline
unet_lora_params, unet_negation = lora_manager_spatial.add_lora_to_model(
File "/workspace/nina-home/MotionDirector/utils/lora_handler.py", line 214, in add_lora_to_model
params, negation, is_injection_hybrid = self.do_lora_injection(
File "/workspace/nina-home/MotionDirector/utils/lora_handler.py", line 183, in do_lora_injection
params, negation = self.lora_injector(**injector_args) # inject_trainable_lora_extended
File "/workspace/nina-home/MotionDirector/utils/lora.py", line 471, in inject_trainable_lora_extended
loras = torch.load(loras)
File "/workspace/nina-home/.conda/envs/motiondirector/lib/python3.8/site-packages/torch/serialization.py", line 815, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/workspace/nina-home/.conda/envs/motiondirector/lib/python3.8/site-packages/torch/serialization.py", line 1033, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.
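For reference, here is a minimal sketch of a sanity check on the LoRA files (assuming the spatial LoRA path from the command above). The "invalid load key, 'v'" message means the first byte torch.load saw was the letter 'v', which is typical of a Git LFS pointer file that was never actually pulled:

import glob
import os

# Hypothetical check: print the size and first bytes of each file in the
# LoRA folder to see whether it is a real torch checkpoint or a text stub.
# Path copied from the --spatial_path_folder argument above.
lora_dir = "/workspace/nina-home/MotionDirector/outputs/ready/image_animation/train_2023-12-26T14-37-16/checkpoint-300/spatial/lora/"

for path in sorted(glob.glob(os.path.join(lora_dir, "*"))):
    with open(path, "rb") as f:
        head = f.read(64)
    print(path, os.path.getsize(path), head)
    # A Git LFS pointer starts with b"version https://git-lfs.github.com/spec/v1",
    # which torch.load reports as: _pickle.UnpicklingError: invalid load key, 'v'.

If the files do turn out to be LFS pointers (a few hundred bytes of text instead of megabytes of binary data), running git lfs pull in the repository they were downloaded from should fetch the real weights.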
Help is greatly appreciated :)