Custom dataset evaluate OOM #15
Hi, could you show me your custom dataset config (the whole exported config in the exp folder is better) and the OOM output?
Here are the config and error log. Config:
-- Process 3 terminated with the following error:
The config looks good. Is it possible that the validation point clouds are significantly larger than the training point clouds? What about
I randomly split the train and val sets, so there shouldn't be much difference.
I am sorry about that issue. I have never encountered a similar problem. I notice that the validation batch size per GPU is identical to the train batch size per GPU (both 1), so the memory consumption of evaluation should be much lower than training. For debugging this issue, my suggestion is to print out the input shape before feeding it into the model.
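A lightweight way to do that is a small helper that logs the shape of every array-like entry in a batch right before the forward pass. This is only a sketch, assuming the data loader yields a dict of tensors; the key names below (`coord`, `feat`) are illustrative, not this repo's actual keys:

```python
def log_batch_shapes(batch):
    # Report the shape of each array-like entry in a batch dict, so an
    # unexpectedly large validation sample is easy to spot before the OOM.
    report = {}
    for key, value in batch.items():
        shape = getattr(value, "shape", None)  # works for torch tensors and numpy arrays
        if shape is not None:
            report[key] = tuple(shape)
            print(f"{key}: shape={tuple(shape)}")
    return report
```

Calling `log_batch_shapes(input_dict)` just before `model(input_dict)` in the evaluation loop would show whether one validation sample is dramatically larger than the rest.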
This is the eval size:
Hi, that is quite a huge number for a point cloud after voxelization. Maybe you can further check whether the validation point clouds were voxelized successfully.
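One quick sanity check is to count the unique occupied voxels for a validation scene and compare it against the raw point count. This is a minimal sketch, not the repo's actual voxelization code, and the `grid_size` value is an assumed placeholder:

```python
import numpy as np

def count_voxels(coords, grid_size=0.05):
    # Quantize point coordinates onto the voxel grid and count unique
    # occupied voxels. If this count is close to the raw point count,
    # voxelization is doing almost no downsampling, and evaluation
    # memory use will blow up accordingly.
    voxel_idx = np.floor(coords / grid_size).astype(np.int64)
    unique_voxels = np.unique(voxel_idx, axis=0)
    return len(unique_voxels)
```

If the voxel count for a validation scene is orders of magnitude higher than for training scenes, the validation clouds were likely not voxelized at all, or were voxelized with a different grid size.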
When I use this model on a custom dataset, the training phase is normal, but the evaluation phase always runs out of GPU memory.
What is the possible reason for this?