Training GPU requirements #14
Hello, thanks for your code. I want to know how much GPU memory is needed for training.
About 30-60 GB, depending on the batch size and resolution.
Hello, I followed the default config settings, but GPU memory during training exceeds 80 GB. Could you give me some advice?
As we used A100 GPUs to train this model, about 80 GB is enough for us. Please enable gradient checkpointing if the GPU memory footprint is too large.
Moreover, I remember the training did not use more than 80 GB; please check the resolution and batch size of your data.
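For reference, here is a minimal, self-contained PyTorch sketch of the trade-off gradient checkpointing makes. In diffusers-style training code it is usually switched on with a single call such as `unet.enable_gradient_checkpointing()`, but the exact flag or module name in this repo's config is an assumption you should confirm against the code.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# Hypothetical stand-in for a large denoising backbone; the real model here
# is the repo's UNet with motion modules, which is not reproduced.
backbone = nn.Sequential(
    *[nn.Sequential(nn.Linear(1024, 1024), nn.GELU()) for _ in range(24)]
)

x = torch.randn(8, 1024, requires_grad=True)

# Recompute each segment's activations during the backward pass instead of
# storing them all, trading extra compute for a lower peak memory footprint.
out = checkpoint_sequential(backbone, 4, x, use_reentrant=False)
out.sum().backward()
```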
This is my stage2_hamer.yaml:

```yaml
image_finetune: False
output_dir: "outputs"

unet_additional_kwargs:
  motion_module_type: Vanilla

pose_guider_kwargs:
clip_projector_kwargs:

zero_snr: True
vae_slicing: True

validation_kwargs:
train_data:
validation_data:
trainable_modules:

unet_checkpoint_path: "outputs/stage1_hamer/checkpoints/checkpoint-final.ckpt"
unet_checkpoint_path: "pretrained_models/checkpoint/stage_2_hamer_release.ckpt"

lr_scheduler: "constant_with_warmup"
max_train_epoch: -1
global_seed: 42
is_debug: False
```

With this config, training runs out of GPU memory on an A100. I had to change sample_n_frames from 16 to 12. Is that feasible?
Thanks
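To verify how changes like sample_n_frames=12 or gradient checkpointing affect the footprint, a small generic PyTorch helper (not part of this repository) can report peak GPU memory per training step:

```python
import torch

def report_peak_memory(step: int) -> None:
    """Print peak GPU memory allocated since the last reset.

    Generic PyTorch utility, not specific to this repo; call it once per
    training step (e.g. right after optimizer.step()).
    """
    if not torch.cuda.is_available():
        return
    peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
    print(f"step {step}: peak GPU memory {peak_gb:.1f} GB")
    torch.cuda.reset_peak_memory_stats()
```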