Add support for 'cloud' friendly API - which one? #21
Comments
If arbitrary object storage backends could be supported, that'd be awesome. Supporting "s3-compatible" semantics seems to be a widely supported least common denominator.
I'm glad you mentioned semantics, because having looked at some Swift docs from that perspective I'm not sure it can quite do the job (and I wasn't even able to find a storage provider that used it). One of the legs holding up sparsebak's speed & efficiency is POSIX fs semantics, which is why I think little old sftp may cut it if the others don't. Interestingly, Amazon S3 offers sftp, which gives me hope others do as well. What I need at a minimum in addition to … Finally, there needs to be some easily accessible API on my client end to allow me to stream files out like sausage links. I may have to take a dependency on a non-core library (or a tool like sshfs) in order to do that.
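A minimal sketch of that kind of client-side streaming over sftp, assuming the paramiko library; the host, key path and remote directory are placeholder values, and this is not Wyng code:

```python
# Minimal sketch, assuming paramiko. Host, key path and remote paths are
# placeholders. Streams chunk data to an sftp server without staging it
# on local disk first.
import paramiko

def stream_chunks_to_sftp(host, user, key_path, chunks, remote_dir):
    key = paramiko.RSAKey.from_private_key_file(key_path)
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, pkey=key)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        for name, data in chunks:   # chunks: iterable of (name, bytes) pairs
            with sftp.open(f"{remote_dir}/{name}", "wb") as remote_file:
                remote_file.write(data)
    finally:
        sftp.close()
        transport.close()
```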
It appears that the Amazon S3 protocol is open and used by a number of other cloud storage providers (more than sftp), and the API is easy to use from Python, so I'm listing it as a candidate for now. The doubt I have about the S3 protocol is whether … Amazon themselves have been gradually moving S3 tools in the direction of POSIX, and now offer actual sftp access, so I'd just as soon use the real thing. The irony here would be that I have to listen to people whine about no S3 protocol support because they have to resort to the service provided by Amazon.
This looks really useful: https://github.com/s3fs-fuse/s3fs-fuse
Perhaps rclone?
Interesting, but rather heavy (14MB compressed). FUSE is a much better deal, IMO, and I get the impression that tools like rclone exist because of Windows usage patterns. OTOH, rclone can mount remote storage as a local fs, so "there ya go"... like FUSE, you can already use it with Wyng. :) With FUSE and rclone available, the question about protocol support becomes more about whether Wyng will integrate the process of connecting remotely, or leave it to the user or a GUI shell to make the connection.
Honestly, I am not sure it's worth adding all the complexity that comes with supporting such functionality natively. Perhaps you could just link to rclone for that, or use its Python wrapper? https://github.com/rclone/rclone/tree/master/librclone#python
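A rough sketch of the even simpler route, shelling out to rclone rather than binding librclone; the remote name "wyngremote" and the destination path are placeholders, and rclone is assumed to be configured separately:

```python
# Sketch only: pipe one chunk's bytes to an rclone remote via `rclone rcat`,
# which writes stdin to a single remote object. "wyngremote" is a
# placeholder remote name configured outside this script.
import subprocess

def upload_chunk(remote_path: str, data: bytes) -> None:
    subprocess.run(
        ["rclone", "rcat", f"wyngremote:{remote_path}"],
        input=data,
        check=True,
    )

upload_chunk("archive/volume1/session1/chunk_000001", b"...chunk bytes...")
```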
I have skimmed the docs, and it seems this would resolve my stalled PoC with rsync.net, since buckets can apparently be configured as read-only for other access keys?
See issue #101 about sshfs performance.
You were right to be skeptical. Object stores like S3 are flat key/value stores with no hierarchy. Generally, object stores expect one to perform large, independent accesses and to tolerate substantial per-access latency. Individual objects can be very large, though, allowing high throughput. It is also possible to operate efficiently on multiple objects in parallel, but this requires being able to submit multiple requests before knowing any of the results. Looking at https://github.com/tasket/wyng-backup/blob/5f153e4c155cd4e400ad85e8b1d6fa08a1508300/doc/Wyng_Archive_Format_V3.md, it seems that the current design is very much more suited to a file system than to object storage. For object storage, I would go with something like this:

```json
{
  "version": 1,
  "keys": [
    { "start": 1234, "size": 5678, "name": "/volume1/session1_data", "hash": "000000000000000000" },
    { "start": 1234, "size": 5678, "name": "/volume1/session2_data", "hash": "000000000000000000" }
  ]
}
```

The key differences are:
This is more work on Wyng's part, but it allows Wyng to use cheap, scalable object storage, rather than a file system that is much harder to scale horizontally.
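A rough sketch of how a client could consume an index like the one above with byte-range reads against S3-compatible storage, assuming boto3; the bucket and object names are placeholders, not an existing Wyng interface:

```python
# Sketch only: fetch the JSON index object, then read one logical entry
# out of a packed data object using an HTTP Range request.
# Bucket and key names are placeholders.
import json
import boto3

s3 = boto3.client("s3")

def read_entry(bucket, index_key, packed_key, wanted_name):
    index = json.loads(s3.get_object(Bucket=bucket, Key=index_key)["Body"].read())
    for entry in index["keys"]:
        if entry["name"] == wanted_name:
            start, size = entry["start"], entry["size"]
            byte_range = f"bytes={start}-{start + size - 1}"
            obj = s3.get_object(Bucket=bucket, Key=packed_key, Range=byte_range)
            return obj["Body"].read()
    raise KeyError(wanted_name)

data = read_entry("my-archive-bucket", "volume1/index.json",
                  "volume1/pack_0001", "/volume1/session1_data")
```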
Interesting. I guess others exist; I've never tried Microsoft OneDrive. https://github.com/oxalica/orb creates a usable block device that can be formatted as a mountable btrfs device.
I'm moving this up to milestone v0.9.
This will have vastly inferior performance to a real block device, and it poses a major security risk if one considers OneDrive to be untrusted. This is because btrfs considers the block device to be trusted. I strongly recommend implementing native support for object storage in wyng-backup instead.
@DemiMarie The alternative here (not to dismiss cloud-based storage, which is needed, since in my past PoC attempts ssh servers proved rare outside of self-hosting and self-managing a VPS) would be to self-host the backup archives, which needs work in parallel. We are talking about the QubesOS user base here, and I can already see some push-back against hosting private backups with any cloud provider. The solution to this, which I'm working on in parallel, is an easy recipe to self-host Wyng archives on a self-made NAS on top of OpenWRT-supported models. I have a working PoC I'm using daily; the fixes needed to make this work have already been made, and traces of the discussions are under #195.
If using the existing Wyng storage model verbatim, storing individual chunks as individual units of data, that would probably suggest larger Wyng chunk sizes (1-16M chunks?) to decrease the overhead of API calls per chunk. As deduplication efficiency goes down with increased chunk size, I'm wondering whether the resulting dedup efficiency will still be acceptable. For example, Duplicacy uses variable-length chunks in that range, targeting 4M chunks on average.
In the case of S3, pruning will be somewhat more complicated than with filesystem-based storage:
On the other hand, for those cloud storage options that support garbage collection / reference counting for blobs, Wyng could offload much of the deduplication complexity to them.
P.S.: Google returns interesting performance differences between sftp vs. sshfs vs. rclone sftp mount vs. rsync over ssh; it may be worthwhile to benchmark.
FWIW, the current max chunk size in Wyng is 2MB. Big chunks aren't good for deduplication, though. I've thought about the content-only addressing angle for some time (the Wyng V3 format is a hybrid of offset and content addressing). Probably the most effective way to reclaim space from unused chunks, without scanning the whole archive directory on every … The problem with keeping a separate chunk map of any kind is that you then have the logistical problem of cache coherency (Wyng already has a one-layer cache coherency challenge; adding another persistent layer is something to avoid if possible).
I think it would be best to look at cloud pricing to see what the cost per request is compared to the cost of metadata access.
I suggest doing benchmarks to determine the relative costs of different operations.
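As a back-of-envelope illustration of the chunk-size vs. per-request cost trade-off discussed above; the per-request price below is an assumed example figure, not a quote from any provider:

```python
# Sketch only: estimate PUT-request counts and request costs for a full
# upload of a 100 GiB volume at different chunk sizes.
# PRICE_PER_1000_PUTS is an assumed example value; check real pricing.
VOLUME_BYTES = 100 * 2**30
PRICE_PER_1000_PUTS = 0.005   # assumed example value, USD

for chunk_mib in (2, 4, 16):
    puts = VOLUME_BYTES // (chunk_mib * 2**20)
    cost = puts / 1000 * PRICE_PER_1000_PUTS
    print(f"{chunk_mib:>2} MiB chunks: {puts:>6} PUT requests, ~${cost:.2f}")
```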
I talked with @Laikulo and they suggested that a local index be used to reduce the number of requests that must be made to object storage.
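A minimal sketch of what such a local index could look like, assuming SQLite; the table layout and function names are hypothetical, not anything Wyng currently uses:

```python
# Sketch only: a local SQLite index of chunk hashes already present in
# object storage, so an upload pass can skip existing chunks without
# issuing LIST/HEAD requests. Schema and names are hypothetical.
import sqlite3

def open_index(path="remote-index.sqlite"):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS chunks"
               " (hash TEXT PRIMARY KEY, object_key TEXT NOT NULL)")
    return db

def needs_upload(db, chunk_hash):
    return db.execute("SELECT 1 FROM chunks WHERE hash = ?",
                      (chunk_hash,)).fetchone() is None

def record_upload(db, chunk_hash, object_key):
    db.execute("INSERT OR REPLACE INTO chunks VALUES (?, ?)",
               (chunk_hash, object_key))
    db.commit()
```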
Although ssh with a Linux shell is currently supported, this is not commonly offered by large 'cloud' storage services. Some protocols that have already been suggested:

- sftp
- Amazon S3
- Swift
- WebDAV

...or using FUSE to access one of the above or another storage type such as cryfs.