In the second stage of FGT network training, I found that batch_size is set to only 1 and only 5 frames per video are selected for training, so the input tensor has shape (b, t, c, h, w) = (1, 5, c, h, w). Why is the batch size set so small?
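For reference, a minimal sketch of the input tensor shape being asked about; the spatial resolution used here is an assumption for illustration, not taken from the FGT code:

```python
import torch

# Stage-two input: (batch, frames, channels, height, width).
# 240 x 432 is an assumed resolution for this sketch.
b, t, c, h, w = 1, 5, 3, 240, 432
frames = torch.randn(b, t, c, h, w)
print(frames.shape)  # torch.Size([1, 5, 3, 240, 432])
```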
In my experiments, I set the batch size to 2, not 1. The batch size is the mini-batch per GPU, so if you use 4 GPUs with a batch size of 2, the overall batch size is 8 (2 × 4).
If you have more GPU memory, you can select more frames per video for training, which may lead to better performance. I selected only 5 frames because of GPU memory limitations.
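A hedged sketch of the per-GPU batching described above, under data-parallel training. The dataset, shapes, and variable names are illustrative, not the actual FGT pipeline; `num_replicas` and `rank` are passed explicitly so the snippet runs without initializing `torch.distributed`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

per_gpu_batch = 2
world_size = 4  # number of GPUs, as in the reply above

# Dummy dataset of 64 clips of 5 frames each (shapes are illustrative).
dataset = TensorDataset(torch.randn(64, 5, 3, 64, 64))

# Each rank sees 1/world_size of the data and batches it with the
# per-GPU batch size; rank 0 is shown here.
sampler = DistributedSampler(dataset, num_replicas=world_size, rank=0)
loader = DataLoader(dataset, batch_size=per_gpu_batch, sampler=sampler)

# Effective batch across all GPUs:
print(per_gpu_batch * world_size)  # 8
```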
Furthermore, I found some duplicate definitions.
In the second-stage FGT network training, there are duplicate definitions between the configuration file 'train.yaml' and the 'inputs.py' file. Also, when running the code in train.py, it loads another configuration file, 'flowCheckPoint/config.yaml', which also contains some duplicate definitions.
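The hazard with duplicate definitions is that whichever config is merged last silently wins. A minimal sketch of a merge that at least surfaces the overrides; the file names follow this issue, but the merge logic is illustrative, not the repo's actual loading code:

```python
import yaml  # PyYAML

def load_configs(*paths):
    """Merge YAML configs in order, warning on conflicting keys."""
    merged = {}
    for path in paths:
        with open(path) as f:
            cfg = yaml.safe_load(f) or {}
        for key in cfg:
            if key in merged and merged[key] != cfg[key]:
                print(f"warning: {path} overrides {key}: "
                      f"{merged[key]!r} -> {cfg[key]!r}")
        merged.update(cfg)
    return merged

# config = load_configs('train.yaml', 'flowCheckPoint/config.yaml')
```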