feat: waku store sync 2.0 common types & codec #3213
base: master
Conversation
You can find the image built from this PR at
Built from 03a66f8
Amazinggg, thanks so much! 😍
Honestly, I think I should understand more about range based reconciliation in order to follow what's going on here.
Any article or resource you recommend to read and use as a reference? I probably have some homework to do 😬
The go-to should be my research issue waku-org/research#102 and the WIP spec https://github.com/waku-org/specs/blob/feat--waku-sync-2/standards/core/sync.md. That said, a lot of details are implementation-specific and not mentioned anywhere 🤔 maybe more code comments are needed.
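For reviewers who want the gist without going through the whole spec, here is a rough, self-contained sketch of the range-based set reconciliation idea. To be clear, this is not the nwaku implementation: `Element`, the XOR fingerprint, `reconcile`, and having both sets visible in one place are simplifications for illustration only; the real protocol exchanges ranges, fingerprints and item sets as messages between two peers.

```nim
import std/sequtils

type
  Element = tuple[time: uint64, hash: uint64]

proc fingerprint(elems: seq[Element]): uint64 =
  # Toy fingerprint: XOR of all element hashes in the range.
  for e in elems:
    result = result xor e.hash

proc reconcile(local, remote: seq[Element], threshold = 4): seq[Element] =
  ## Returns the `remote` elements missing from `local`.
  ## Both sequences are assumed sorted by time with unique timestamps.
  if fingerprint(local) == fingerprint(remote):
    return                      # fingerprints match: skip the whole range
  if remote.len <= threshold:
    # Small range: exchange the item set itself and diff directly.
    return remote.filterIt(it notin local)
  # Large, mismatched range: split at the midpoint and recurse on each half.
  let mid = remote.len div 2
  let pivot = remote[mid].time
  let left  = reconcile(local.filterIt(it.time < pivot), remote[0 ..< mid], threshold)
  let right = reconcile(local.filterIt(it.time >= pivot), remote[mid .. ^1], threshold)
  return left & right

when isMainModule:
  let a = @[(time: 1'u64, hash: 11'u64), (time: 2'u64, hash: 22'u64)]
  let b = @[(time: 1'u64, hash: 11'u64), (time: 2'u64, hash: 22'u64),
            (time: 3'u64, hash: 33'u64)]
  echo reconcile(a, b)          # only the element with time 3 is missing locally
```

The useful property is that matching fingerprints let whole ranges be skipped, so the work scales with the size of the difference between the sets rather than with the size of the sets themselves.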
Thanks a lot for the (significant!) effort, also in the follow-up PRs! 🚀 I've requested some changes, mostly related to helping reviewers understand the code/protocol a bit better. For example, there are some magic numbers that I still can't quite figure out, some codec types don't quite match the spec or need some context, etc.
For this PR, I'd be happy to approve if the code is made clearer (with comments, reduced magic numbers, etc.). However, I think it will be easier for this (and follow-up) PRs to expand the wire protocol specification itself and get an agreed-upon design as spec first so that reviewers wouldn't have to infer how the protocol works only from code.
waku/waku_store_sync/common.nim
Outdated
SyncPayload* = object
  ranges*: seq[(Slice[ID], RangeType)]
  fingerprints*: seq[Fingerprint]
  itemSets*: seq[ItemSet]
I think we really need a proper wire protocol specification to understand (and review) what SyncPayload does, when and how it's exchanged and interpreted, the procedural flow, etc. We can then review the implementation as an accurate reflection of the agreed-upon design/specification. For example, are Waku Sync nodes simply exchanging SyncPayload as handshake messages? Is there no negotiation about what shards (or time range) to sync? If not, do we assume that this Sync protocol will eventually be coupled with a negotiation protocol that first establishes the set of shards and time range (and later content topics) the nodes will sync? If not, I can imagine two Sync nodes storing completely different shards routinely and happily reconciling their 100% divergent "sets", even though the two sets have nothing to do with each other. The metadata protocol also cannot be relied upon, as (i) this would create a strong dependency between the two protocols and (ii) the sets Sync nodes store may not match all the shards they subscribe to/advertise in the metadata protocol.
At this stage, the implementation is 100% feature-equivalent to the previous one. There's no negotiation. We can start discussing limited shard support in my research issue if you want.
waku/waku_store_sync/codec.nim
Outdated
  return idx

proc deltaDecode*(T: type SyncPayload, buffer: seq[byte]): T =
I think we can reference the spec here in the comments.
I don't think it's a good idea to add links in the code. Links always end up broken anyway.
Maybe a section at the top explaining that there's a spec?
Thanks for it! A few comments so far :)
Thanks for it! Approving so as not to block it, although I added a few nitpick comments that I hope you find interesting.
Thanks for the patience!
buf = Leb128Buf[uint64]()

for id in itemSet.elements:
  let timeDiff = uint64(id.time) - uint64(lastTime)
Should we convert the timestamps to seconds, instead of nanos, before computing the diff? I read somewhere (https://logperiodic.com/rbsr.html) that seconds should be used.
🤔 we could use seconds instead of nanos. Time diffs would be smaller but hashes would have to be included more often, so IDK if it's worth it.
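To make the trade-off concrete, here is a quick back-of-the-envelope check of how many bytes an unsigned LEB128 varint needs for typical deltas in each unit (illustrative only; `leb128Len` is just a helper written for this comment, not part of the PR):

```nim
proc leb128Len(x: uint64): int =
  # Byte count of the unsigned LEB128 encoding of `x` (7 payload bits per byte).
  var v = x
  result = 1
  while v >= 0x80'u64:
    v = v shr 7
    inc result

echo leb128Len(1'u64)              # 1 s delta in seconds       -> 1 byte
echo leb128Len(500_000_000'u64)    # 0.5 s delta in nanoseconds -> 5 bytes
echo leb128Len(1_000_000_000'u64)  # 1 s delta in nanoseconds   -> 5 bytes
```

So second-resolution deltas would save a few bytes each, but with half-second accuracy many more consecutive IDs would land on the same second, and every such collision costs a full hash, which is presumably what makes the saving questionable.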
buf = timeDiff.toBytes(Leb128)
output &= @buf

if timeDiff == 0:
I think this will never happen as we are considering nanos
It can, because our timestamps have nanosecond precision but only half-second accuracy.
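A hedged sketch of what that means for the encoder (not the actual codec: `Id`, `writeVarint` and `encodeIds` are made-up stand-ins for the stew/leb128-based code in the diff above): because accuracy is only about half a second, two stored messages can easily carry identical nanosecond timestamps, so the zero-delta branch has to fall back to the hash to keep the IDs distinguishable.

```nim
type Id = object
  time: uint64          # nanosecond timestamp
  hash: array[32, byte] # message hash

proc writeVarint(output: var seq[byte], x: uint64) =
  # Unsigned LEB128: 7 payload bits per byte, high bit marks continuation.
  var v = x
  while true:
    if v < 0x80'u64:
      output.add byte(v)
      break
    output.add byte((v and 0x7f'u64) or 0x80'u64)
    v = v shr 7

proc encodeIds(ids: seq[Id]): seq[byte] =
  var lastTime = 0'u64
  for id in ids:                    # assumes `ids` is sorted by time
    let timeDiff = id.time - lastTime
    lastTime = id.time
    result.writeVarint(timeDiff)    # delta instead of absolute timestamp
    if timeDiff == 0:               # identical timestamps: disambiguate by hash
      result.add id.hash
```

In this sketch the matching decoder would apply the same rule in reverse: whenever it reads a zero delta, a hash follows.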
Description
First PR for the new Waku Store Sync 2.0
Includes types and encode/decode of payloads, as well as tests.
Specification
Research issue
Changes
Followed by #3215
Issue