
ValueError in maisi_train_controlnet_tutorial.ipynb #1838

Closed · KumoLiu opened this issue Sep 23, 2024 · 4 comments · Fixed by #1839

@KumoLiu (Contributor) commented Sep 23, 2024

```
INFO:notebook:Inference...
2024-09-23 06:17:47,215 - INFO - 'dst' model updated: 158 of 206 variables.

INFO:maisi.controlnet.infer:Number of GPUs: 2
INFO:maisi.controlnet.infer:World_size: 1
WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir_controlnet_train_demo/environment_maisi_controlnet_train.json' mode='r' encoding='UTF-8'>

WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir_controlnet_train_demo/config_maisi.json' mode='r' encoding='UTF-8'>

WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir_controlnet_train_demo/config_maisi_controlnet_train.json' mode='r' encoding='UTF-8'>

INFO:maisi.controlnet.infer:trained autoencoder model is not loaded.
INFO:maisi.controlnet.infer:trained diffusion model is not loaded.
INFO:maisi.controlnet.infer:set scale_factor -> 1.0.
INFO:maisi.controlnet.infer:trained controlnet is not loaded.
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/Code/tutorials/generation/maisi/scripts/infer_controlnet.py", line 207, in <module>
    main()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/Code/tutorials/generation/maisi/scripts/infer_controlnet.py", line 159, in main
    check_input(None, None, None, output_size, out_spacing, None)
  File "/workspace/Code/tutorials/generation/maisi/scripts/sample.py", line 378, in check_input
    raise ValueError(
ValueError: The output_size[0] have to be chosen from [256, 384, 512], and output_size[2] have to be chosen from [128, 256, 384, 512, 640, 768], yet got (128, 128, 128).
E0923 06:17:50.348000 140369402987136 torch/distributed/elastic/multiprocessing/api.py:863] failed (exitcode: 1) local_rank: 0 (pid: 209184) of binary: /usr/bin/python
Traceback (most recent call last):
  File "/usr/local/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 919, in main
    run(args)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 910, in run
    elastic_launch(
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
scripts.infer_controlnet FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-09-23_06:17:50
  host      : yunliu-MS-7D31
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 209184)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
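The validation that raises this error can be sketched as follows. This is a simplified stand-in for `check_input` in `scripts/sample.py` (which takes more arguments); the allowed-size lists are copied from the error message above:

```python
# Simplified sketch of the output-size validation behind the ValueError above.
# The real check_input in scripts/sample.py validates more inputs; only the
# shape check is reproduced here, with the allowed lists from the error text.

ALLOWED_XY = [256, 384, 512]
ALLOWED_Z = [128, 256, 384, 512, 640, 768]

def check_output_size(output_size):
    """Raise ValueError if the requested volume shape is unsupported."""
    if output_size[0] not in ALLOWED_XY or output_size[2] not in ALLOWED_Z:
        raise ValueError(
            f"The output_size[0] have to be chosen from {ALLOWED_XY}, and "
            f"output_size[2] have to be chosen from {ALLOWED_Z}, "
            f"yet got {tuple(output_size)}."
        )

check_output_size((256, 256, 128))  # passes the check
```

With the tutorial's original toy shape `(128, 128, 128)`, the first condition fails because 128 is not an allowed in-plane size, which is exactly the traceback above.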
@KumoLiu (Contributor, Author) commented Sep 23, 2024

Hi @guopengf, could you please take a look at this issue? Thanks.

KumoLiu added a commit to KumoLiu/tutorials that referenced this issue Sep 23, 2024
Signed-off-by: YunLiu <[email protected]>
@KumoLiu (Contributor, Author) commented Sep 23, 2024

```
INFO:creating training data:Using device cuda:0
WARNING:py.warnings:You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.

ERROR:creating training data:The trained_autoencoder_path does not exist!
INFO:creating training data:filenames_raw: ['tr_image_001.nii.gz', 'tr_image_002.nii.gz']
[rank0]:[W923 05:56:48.650222570 ProcessGroupNCCL.cpp:1207] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir/./embeddings/tr_image_001_emb.nii.gz.json' mode='r' encoding='UTF-8'>

WARNING:py.warnings:unclosed file <_io.TextIOWrapper name='./temp_work_dir/./embeddings/tr_image_002_emb.nii.gz.json' mode='r' encoding='UTF-8'>
[rank0]:[W923 10:10:11.778510110 ProcessGroupNCCL.cpp:1207] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
```
In maisi_diff_unet_training_tutorial.ipynb
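The ProcessGroupNCCL warning in the log above asks the application to call `destroy_process_group` before exit. A minimal teardown pattern looks like the sketch below; this is a hypothetical single-process skeleton (gloo backend, env values chosen for illustration), not the tutorial's actual code, which is launched with torchrun and uses NCCL on GPUs:

```python
import os
import torch.distributed as dist

def run_with_teardown(work):
    """Init a process group, run the work, and always tear the group down."""
    # Single-process, gloo-backend init purely for illustration; torchrun
    # normally sets MASTER_ADDR/MASTER_PORT and the rank/world-size env vars.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend="gloo", rank=0, world_size=1)
    try:
        return work()  # training / data-creation work goes here
    finally:
        # Explicit teardown is what the ProcessGroupNCCL warning asks for.
        if dist.is_initialized():
            dist.destroy_process_group()

result = run_with_teardown(lambda: "done")
```

Wrapping the body in `try/finally` ensures the group is destroyed even when the work raises, which is how the warning can still appear after a crash like the one above.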

@guopengf (Contributor) commented

Hi @KumoLiu, I think this error was introduced by PR #1825, which added an input check function to the ControlNet inference script. We can change the toy data in the ControlNet tutorial to [256, 256, 128] with spacing [1.5, 1.5, 1.5] to pass this input check.
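Generating toy volumes with that shape and spacing can be sketched as follows. This is an assumption about what the tutorial cell would do; saving to `.nii.gz` (e.g. via nibabel) is omitted to keep the sketch dependency-free:

```python
import numpy as np

# Toy volume matching the suggested fix: (256, 256, 128) voxels at
# 1.5 mm isotropic spacing, expressed as a diagonal voxel-to-world affine.
output_size = (256, 256, 128)
spacing = (1.5, 1.5, 1.5)

volume = np.random.rand(*output_size).astype(np.float32)
affine = np.diag([*spacing, 1.0])

# This shape satisfies the constraints from the ValueError above:
assert output_size[0] in [256, 384, 512]
assert output_size[2] in [128, 256, 384, 512, 640, 768]
```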

@guopengf (Contributor) commented

> [Quotes the `maisi_diff_unet_training_tutorial.ipynb` log from the comment above.]

@dongyang0122 Would you like to look into this error for diffusion unet?

KumoLiu added a commit to KumoLiu/tutorials that referenced this issue Sep 24, 2024
Signed-off-by: YunLiu <[email protected]>
KumoLiu added a commit that referenced this issue Sep 25, 2024
Fixes #1838 

### Description
- Update to avoid the deprecation warning from `torch.cuda` `GradScaler` and `autocast`.
- Update to fix the unclosed-file warnings.
- Update the dimensions to avoid the ValueError in the ControlNet tutorial.
- Add a multi-GPU check to avoid the distributed warning.

### Checks
- [x] Avoid including large-size files in the PR.
- [x] Clean up long text outputs from code cells in the notebook.
- [x] For security purposes, please check the contents and remove any
sensitive info such as user names and private key.
- [x] Ensure (1) hyperlinks and markdown anchors are working (2) use
relative paths for tutorial repo files (3) put figure and graphs in the
`./figure` folder
- [ ] Notebook runs automatically `./runner.sh -t <path to .ipynb file>`

---------

Signed-off-by: YunLiu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Dong Yang <[email protected]>