Hi, thanks for your awesome code.
I tried to run the provided code, but an error occurred when I ran the command `bash scripts/download_process_adt.bash`.
I found that this error occurs because the ADT download system was updated; referring to 'adt_benchmark_dataset_downloader.py':
```python
if __name__ == "__main__":
    print(
        "[ERROR]: adt_benchmark_dataset_downloader is deprecated. Please use aria_dataset_downloader instead. \n"
        "To learn more, refer to ADT Documentation: https://facebookresearch.github.io/projectaria_tools/docs/open_datasets/aria_digital_twin_dataset/dataset_download) "
        "\nThis tool will be removed in 2025."
    )
```
Following this instruction, I changed the code in scripts/download_process_adt.bash:
```bash
for SCENE_NAME in "${SCENE_NAMES[@]}"; do
    aria_dataset_downloader \
        -c ${ADT_DATA_ROOT}/ADT_download_urls.json \
        -o ${ADT_DATA_ROOT}/ \
        -d 0 1 5 6 \
        -l ${SCENE_NAME}
done
```
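Before running the processing step, a quick sanity check can confirm that each scene folder actually contains downloaded files. This is a minimal sketch, not part of the repo's scripts; `ADT_DATA_ROOT` and `SCENE_NAMES` mirror the variables in scripts/download_process_adt.bash and should be adjusted to your setup:

```shell
# Hypothetical sanity check: count files in each downloaded scene folder.
ADT_DATA_ROOT=${ADT_DATA_ROOT:-./adt_data}
SCENE_NAMES=("Apartment_release_multiskeleton_party_seq121_71292")

for SCENE_NAME in "${SCENE_NAMES[@]}"; do
    dir="${ADT_DATA_ROOT}/${SCENE_NAME}"
    if [ -d "$dir" ]; then
        # A freshly downloaded scene should contain at least one file.
        count=$(find "$dir" -type f | wc -l)
        echo "${SCENE_NAME}: ${count} files"
    else
        echo "${SCENE_NAME}: MISSING"
    fi
done
```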
By changing the scene name, e.g.,
"Apartment_release_multiskeleton_party_seq121_71292"
"Apartment_release_multiskeleton_party_seq121_M1292"
"Apartment_release_multiskeleton_party_seq122_71292"
"Apartment_release_multiskeleton_party_seq122_M1292"
I found the download went smoothly, but the further processing code shows some errors:
```
There are 0/4349 valid frames in the output.
FutureWarning: You are using torch.load with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
Skipping tagging and detection models.
Traceback (most recent call last):
File "/disk/astar/stua/egolifter/scripts/generate_gsa_results.py", line 726, in
main(args)
File "/home/stua/anaconda3/envs/egolifter/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/disk/astar/stua/egolifter/scripts/generate_gsa_results.py", line 488, in main
raise ValueError("Unknown dataset type. ")
ValueError: Unknown dataset type.
```
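The failure at generate_gsa_results.py line 488 looks like a dataset-type dispatch falling through. The sketch below is a hypothetical reconstruction of that pattern, not the repo's actual code (the layout markers `video.vrs` and `images/` are assumptions for illustration); it shows why a scene folder with an unexpected layout, e.g. one that was downloaded but not yet processed, would hit this `ValueError`:

```python
# Hypothetical sketch of a dataset-type dispatch that falls through to
# ValueError("Unknown dataset type. "). The markers checked here are
# assumptions; inspect generate_gsa_results.py line 488 for the real ones.
from pathlib import Path

def detect_dataset_type(scene_dir: str) -> str:
    scene = Path(scene_dir)
    if (scene / "video.vrs").exists():   # assumed marker for an ADT scene
        return "adt"
    if (scene / "images").is_dir():      # assumed marker for an image folder
        return "colmap"
    # A downloaded-but-unprocessed folder matches neither layout and
    # falls through here, reproducing the reported error.
    raise ValueError("Unknown dataset type. ")
```

Checking what the processing script expects to find in each scene folder (versus what `aria_dataset_downloader` actually produced) may explain both the "0/4349 valid frames" message and the exception.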