Add transfer configuration to support concurrent downloading #841
I don't believe that requesting parallel streams for a single file will increase bandwidth: you should simply saturate your network bandwidth once the transfer begins. Concurrency helps most when making many requests and wishing to amortise the latency. (Decoding and compression may parallelise well, but they are not normally significant, and Python's GIL makes it hard to achieve.) Having said that, s3transfer does do some clever things, so if anyone is interested in calling it within get/download[_file], I would be interested to see this.
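For reference, a minimal sketch of what calling s3transfer directly (via boto3's `S3Transfer` wrapper) could look like inside such a download helper; the bucket, key, local filename and config values below are placeholders, not anything taken from this issue:

```python
import boto3
from boto3.s3.transfer import S3Transfer, TransferConfig

client = boto3.client("s3")

# S3Transfer wraps s3transfer: for objects above multipart_threshold it issues
# ranged part downloads on a thread pool, controlled by max_concurrency.
transfer = S3Transfer(
    client,
    config=TransferConfig(
        multipart_threshold=8 * 1024 ** 2,  # placeholder: split objects > 8 MB
        max_concurrency=10,                 # placeholder: parallel part downloads
    ),
)
transfer.download_file("example-bucket", "example/key", "local-file")
```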
I noticed a significant speed-up when multipart download was enabled. I tested it with an S3 file that is 2 GB large. With multipart download it took ~10 s; without, ~2 min (see the thresholds in the snippet):

```python
import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 ** 2
multipart_threshold = 5 * MB        # --> ~10s
# multipart_threshold = 5000 * MB   # --> ~2min

config = TransferConfig(multipart_threshold=multipart_threshold)

s3 = boto3.client("s3")
s3.download_file("s3-bucket", "s3-path", "filename", Config=config)
```

This behavior might also be related to #900.

Background: I use
After implementing concurrency in uploads, I don't think it would be much work to do the same in get_file and cat_file, with the same semantics. Would anyone like to try? Obviously, the threshold to go concurrent, the block size and the maximum number of connections would all be important parameters, as well as whether the download is happening inside AWS versus over some local wifi. Note, however, that downloading multiple files is already concurrent, so it may be that not too many people are suffering any slowdown.
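For anyone who wants to experiment before this lands in get_file/cat_file, here is a minimal sketch of the idea from user code, assuming fsspec's ranged `cat_file(start=..., end=...)` reads; the path, local filename, 32 MB block size and 8 workers are all placeholder choices for this example, not s3fs parameters:

```python
import concurrent.futures

import s3fs

fs = s3fs.S3FileSystem()
path = "s3-bucket/s3-path"      # placeholder remote path
local = "filename"              # placeholder local destination
size = fs.info(path)["size"]
block = 32 * 1024 ** 2          # 32 MB per ranged request; tune for the network
ranges = [(start, min(start + block, size)) for start in range(0, size, block)]

def fetch(rng):
    start, end = rng
    # cat_file with start/end issues a single ranged GET for bytes [start, end)
    return start, fs.cat_file(path, start=start, end=end)

# Eight workers is an arbitrary choice; map() yields results in submission order.
with open(local, "wb") as out, concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    for start, data in pool.map(fetch, ranges):
        out.seek(start)
        out.write(data)
```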
I am trying to speed up downloading of large files from S3.

I can enable parallel downloading of file parts in boto3 using the transfer configuration described at https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3.html:
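For example, something along these lines (the threshold, chunk size and concurrency values are illustrative, not taken from the guide verbatim):

```python
import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 ** 2
config = TransferConfig(
    multipart_threshold=8 * MB,  # objects larger than this are split into parts
    multipart_chunksize=8 * MB,  # size of each part
    max_concurrency=10,          # parts downloaded in parallel threads
)

s3 = boto3.client("s3")
s3.download_file("s3-bucket", "s3-path", "filename", Config=config)
```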
Can I do the same in s3fs? It seems that passing s3_additional_kwargs doesn't help. It would be great to have this feature in s3fs.
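A sketch of the kind of call this refers to (the path and TransferConfig values are illustrative; passing a TransferConfig this way is the attempt that, per the text above, does not help):

```python
import s3fs
from boto3.s3.transfer import TransferConfig

# s3_additional_kwargs supplies extra parameters to individual S3 API calls
# (e.g. ServerSideEncryption); it does not configure a transfer manager, so a
# TransferConfig passed here does not enable parallel part downloads.
fs = s3fs.S3FileSystem(
    s3_additional_kwargs={"Config": TransferConfig(max_concurrency=10)}
)
fs.get("s3-bucket/s3-path", "filename")
```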