Deduplication technique could be used in design #2129
-
Data dedup is usually expensive and has little value for common use cases. For cases with lots of duplicated data, maybe we can use hard links, copy_file_range(), or snapshots (planned) to avoid storing the duplicated data, which is also much faster.
-
Feature request: please add a plugin interface so we can write our own dedup implementation, swap the hash to BLAKE3, etc.
It's expensive not to deploy it.
Actually, if ZFS is compatible there will be built-in dedup without this additional part, but we should still be able to code our own dedup too.
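The requested plugin point could look something like this. All names here are hypothetical, not existing project APIs: the dedup key function is a swappable callable, so a BLAKE3 implementation (a third-party package in Python) could replace the default; hashlib.blake2b stands in for it below.

```python
import hashlib
from typing import Callable

# A hash plugin is just a function from a block of bytes to a key string.
HashFn = Callable[[bytes], str]

def sha256_hash(block: bytes) -> str:
    """Default key function for the sketch."""
    return hashlib.sha256(block).hexdigest()

def blake2b_hash(block: bytes) -> str:
    """Stand-in for BLAKE3; swap in the blake3 package if installed."""
    return hashlib.blake2b(block, digest_size=32).hexdigest()

def block_key(block: bytes, hash_fn: HashFn = sha256_hash) -> str:
    """Dedup index key for a block, pluggable via hash_fn."""
    return hash_fn(block)

# Same block, different plugin, different key space:
assert block_key(b"data") == hashlib.sha256(b"data").hexdigest()
assert block_key(b"data", blake2b_hash) != block_key(b"data")
```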
-
What would you like to be added:
Data deduplication is a technique that reuses repeated data chunks (blocks in this project) among different files, and it has been used in the Ceph project.
Several file systems use deduplication to improve read performance and save storage cost (ATC '19, TOS '19).
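The block-reuse idea described above can be sketched as a content-addressed store (illustrative only, not this project's design): blocks are keyed by their content hash, so a repeated block is stored once no matter how many files reference it.

```python
import hashlib

class DedupStore:
    """Minimal block-level dedup index: hash key -> block, stored once."""

    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = {}  # content hash -> block bytes

    def write(self, data: bytes) -> list:
        """Split data into fixed-size blocks; return the per-block keys."""
        keys = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            key = hashlib.blake2b(block, digest_size=16).hexdigest()
            self.blocks.setdefault(key, block)  # duplicates are reused
            keys.append(key)
        return keys

    def read(self, keys: list) -> bytes:
        """Reassemble a file from its block keys."""
        return b"".join(self.blocks[k] for k in keys)

store = DedupStore(block_size=4)
keys = store.write(b"aaaabbbbaaaa")  # first and last block are identical
assert store.read(keys) == b"aaaabbbbaaaa"
assert len(store.blocks) == 2        # only two unique blocks stored
```

Real systems add reference counting for deletion and often use content-defined (variable-size) chunking instead of fixed-size blocks; both are omitted here for brevity.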
Why is this needed: