I tried to follow Step 5 of "Initial Installation" and ran into an issue with the command `humann_databases --download chocophlan full <path>`. After ~15 minutes of downloading, the download failed with the following error: `CRITICAL ERROR: Unable to download and extract from URL: http://huttenhower.sph.harvard.edu/humann_data/chocophlan/full_chocophlan.v201901_v31.tar.gz`. I tried this command four times, and each attempt ended in the same error. Other users appear to have hit similar problems downloading from the default file-hosting URLs: https://forum.biobakery.org/t/difficulty-downloading-databases-in-humann3/1343. Given the size of the dataset, and thus the time a download takes, restarting from scratch after every failure is inefficient.
If possible, it would be good to add resumption functionality similar to `wget -c`, so the download utility can pick up where an interrupted transfer left off.
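For illustration, here is a minimal sketch of what resume support could look like using HTTP Range requests, in the spirit of `wget -c`. This is not HUMAnN's actual download code; the function name, chunk size, and use of the `requests` library are my own assumptions, and the URL is just the one from the error above:

```python
import os
import requests

# Hypothetical resumable downloader, illustrating the requested behavior.
# URL copied from the failing command; destination filename is illustrative.
URL = "http://huttenhower.sph.harvard.edu/humann_data/chocophlan/full_chocophlan.v201901_v31.tar.gz"
DEST = "full_chocophlan.v201901_v31.tar.gz"
CHUNK = 1024 * 1024  # 1 MiB per read


def download_resumable(url: str, dest: str) -> None:
    # Resume from however many bytes are already on disk.
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={offset}-"} if offset else {}
    with requests.get(url, headers=headers, stream=True, timeout=60) as resp:
        # 206 means the server honored the Range header; a plain 200 means
        # it sent the full file, so we must start writing from the beginning.
        if resp.status_code == 200:
            offset = 0
        resp.raise_for_status()
        mode = "ab" if offset else "wb"
        with open(dest, mode) as fh:
            for chunk in resp.iter_content(chunk_size=CHUNK):
                fh.write(chunk)


if __name__ == "__main__":
    download_resumable(URL, DEST)
```

Note this only works if the hosting server honors `Range` headers (responding with 206 Partial Content); otherwise the sketch falls back to a full re-download.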
Thank you for creating this issue.
We currently field issues through our bioBakery Discourse Support Forum.
If you would please post the issue to Discourse, we would be happy to sync up with you there to get it resolved.