Dear author, thank you so much for your work.

I'm impressed by your work and tried to run inference directly with your pretrained models, but things didn't work out for me.

I followed your instructions in direct_inference and set up the environment. Then I ran the example command

```
python direct_inference.py STUNetTrainer_small example/Task032_AMOS22_Task1 example/result
```

exactly as written in your instructions. At first I thought the process would go smoothly, given what the terminal printed out,
but suddenly an error occurred:

```
  File "D:\anaconda3\envs\pyTorch39\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x0000022D9F311550>: attribute lookup <lambda> on nnunet.utilities.nd_softmax failed
[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
```

followed by:

```
  File "D:\anaconda3\envs\pyTorch39\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
```

The full error message is quite long, so I've only pasted the last lines here.
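If I understand the first message correctly, the PicklingError itself is ordinary CPython behavior rather than a problem with your files: a lambda, even one bound to a module-level name, cannot be pickled, because pickle stores functions by qualified name and `<lambda>` cannot be looked up in its module. A minimal sketch that reproduces it (the `softmax_helper` name here is just my stand-in for whatever lambda lives in nnunet.utilities.nd_softmax, not the real definition):

```python
import pickle

# Stand-in for a module-level lambda; its __qualname__ is "<lambda>",
# which pickle cannot look up in the module, so pickling fails.
softmax_helper = lambda x: x

try:
    pickle.dumps(softmax_helper)
except pickle.PicklingError as err:
    print("PicklingError:", err)

# A plain named function pickles fine, because pickle can find it by name.
def softmax_helper_named(x):
    return x

roundtripped = pickle.loads(pickle.dumps(softmax_helper_named))
```

On Windows, multiprocessing uses the "spawn" start method, so everything handed to a worker process must be picklable — which is presumably why this only surfaces there.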
I searched for this error, and the explanation I found was:

> This error typically occurs when the pickle module in Python tries to load an empty file or a file that has been truncated. It means that the end-of-file was reached unexpectedly while there was still data expected to be read. This can happen if the file is empty, corrupted, or if there is a mismatch between how the data was written and how it is being read.
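That explanation seems to match the second traceback: the spawned worker tries to unpickle its startup payload from the parent, and if the parent has already died (here, from the PicklingError), the stream is empty and `pickle.load` immediately hits end-of-file. A minimal sketch using an empty in-memory buffer to mimic the truncated pipe (my assumption about the failure mode, not something taken from the traceback):

```python
import io
import pickle

# An empty buffer stands in for the pipe the parent process never
# wrote to because it had already crashed with the PicklingError.
empty_stream = io.BytesIO(b"")

try:
    pickle.load(empty_stream)
except EOFError as err:
    print(err)  # "Ran out of input", the same message as in the traceback
```

So the EOFError looks like a secondary symptom of the PicklingError, not a separate problem with your pkl files.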
I checked the RESULT_FOLDER's structure, and it is exactly the same as the structure in your instructions, so your pkl files must be fine. With those possibilities excluded, I have no clue why this happens, so I've come to ask for your help. I would appreciate any suggestions you could offer.
Thank you very much!
yy042 changed the title from "pickle error occured when running the Direct Inference command" to "pickle error occurs when running the Direct Inference command" on May 9, 2024.