Error in train.py #27
Hi @jimpap1, can you explain how you downloaded and processed the datasets? It's hard to pinpoint why the images failed to load without more context. Thanks!
@ajaysridhar0 Hello, I ran into the same problems ("Failed to load" and TypeError). I downloaded the datasets through the link in the README, then processed them with data_split.py (I only tried the GoStanford dataset), then changed the directory in vint.yaml and ran train.py.
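Before pointing vint.yaml at the split output, it can help to sanity-check that data_split.py actually produced non-empty split directories. The sketch below is a hypothetical checker: the `<dataset>_train` / `<dataset>_test` layout and the dataset name are assumptions, not guaranteed by the repo, so adjust them to whatever data_split.py writes on your machine.

```python
import os

def check_split_dirs(data_dir, dataset_name="go_stanford"):
    """Sanity-check that the dataset split step produced output.

    Assumes (hypothetically) that splitting yields sibling
    `<dataset>_train` and `<dataset>_test` directories, each
    containing trajectory folders. Returns a list of problems
    found (empty list means the layout looks usable).
    """
    problems = []
    for split in ("train", "test"):
        split_dir = os.path.join(data_dir, f"{dataset_name}_{split}")
        if not os.path.isdir(split_dir):
            problems.append(f"missing split directory: {split_dir}")
        elif not os.listdir(split_dir):
            problems.append(f"split directory is empty: {split_dir}")
    return problems
```

If this reports missing or empty directories, the loader will fail before any image is read, which would explain the "Failed to load" messages.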
@swx153 Thanks for the reply! I'd also like to confirm: for the Stanford dataset, is running data_split.py the only processing step needed? I've also seen reports that a PyTorch version mismatch can cause this error; may I ask which torch version you're using? If possible, could you share your contact info so I can compare where my setup diverges? I'm currently stuck at this step.
@swx153 Thank you very much! Sorry, I'm no longer working on this project for now. Thanks for your patient replies!
Why do I get "Aborted (core dumped)" when I run train.py?
Same here with torch 2.4.0 on Ubuntu 18.04.
I solved the problem by downgrading opencv-python to 4.1.2.30. I think there are two main causes of the "Aborted (core dumped)" error: 1) too small an image_log_freq (>0) or too large a num_image_log — these settings consume a lot of memory, which can trigger the error; 2) on some machines, the PyQt5 version mismatches the opencv-python version used in ViNT, which also causes it. P.S. torch.multiprocessing can sometimes cause this error as well.
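The two causes above can be checked before launching training. The sketch below is a heuristic diagnostic, not part of ViNT: the threshold `max_image_log` and the treatment of 4.1.2.30 as the known-good opencv-python version are assumptions taken from this thread, so tune them to your setup.

```python
def diagnose_core_dump(cv2_version, image_log_freq, num_image_log,
                       max_image_log=8):
    """Heuristic pre-flight checks for the crash causes described above.

    `cv2_version` is e.g. the string from cv2.__version__; the other
    two arguments mirror the (assumed) config keys image_log_freq and
    num_image_log from the training config. Returns a list of warnings.
    """
    warnings = []
    known_good = (4, 1, 2)  # from the downgrade that fixed it in this thread
    version_tuple = tuple(int(p) for p in cv2_version.split(".")[:3])
    if version_tuple > known_good:
        warnings.append(
            f"opencv-python {cv2_version} is newer than the known-good "
            "4.1.2.30; consider downgrading")
    if 0 < image_log_freq < 10:
        warnings.append(
            "image_log_freq is very small; logging images every few "
            "steps can exhaust memory")
    if num_image_log > max_image_log:
        warnings.append(
            f"num_image_log={num_image_log} is large; try a value "
            f"<= {max_image_log} to reduce memory use")
    return warnings
```

Running it with your actual cv2 version and config values should surface the likely culprit before the process aborts.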
Maybe you should delete the dataset_xx.lmdb file and run train.py again. I found that once the detection cache file is created, it is never updated. You can also use gdb to check whether _image_cache in vint_dataset.py contains reasonable key-value pairs. Hope this helps!
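Clearing the stale cache by hand can be scripted. The sketch below is an assumption-laden helper, not code from the repo: the `dataset_*.lmdb` glob pattern is inferred from the comment above, and it handles both file-style and directory-style LMDB caches since LMDB environments are often directories on disk.

```python
import glob
import os
import shutil

def clear_lmdb_cache(data_dir):
    """Delete stale dataset_*.lmdb caches so they are rebuilt on the
    next run of train.py. The glob pattern is a guess based on this
    thread; adjust it to your actual cache filenames. Returns the
    list of paths that were removed.
    """
    removed = []
    for path in glob.glob(os.path.join(data_dir, "dataset_*.lmdb")):
        if os.path.isdir(path):   # LMDB caches are often directories
            shutil.rmtree(path)
        else:
            os.remove(path)
        removed.append(path)
    return removed
```

A second invocation should return an empty list, confirming the cache is gone before you relaunch training.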
When I run train.py, it fails to load images and I also get TypeErrors. Any thoughts? Did I miss anything?