Define Bridge Node Architecture #319
I don't understand the reasoning behind "tiny radius" nodes, and I also want to clarify whether these are nodes that the bridge runs itself, independent nodes, or something fancier like a single piece of software managing multiple nodes concurrently on the network.

All the …

Architecture plan: …

In the diagram, …

The rationale to tune the radius down includes: …
The general idea looks reasonable to me 👍. I'm not sure if it is worth spreading those evenly across the keyspace (especially in a testing network), because in practice it is really hard to generate random NodeIDs which fall in a bucket higher than 16-17ish.
Totally. "Evenly-spaced" is very loose here. If we want 256 nodes, then we could grind until we have node IDs that start with every byte from 0x00 to 0xFF, and not worry much about the rest of the node ID (maybe a slight preference for the next byte to be closer to 0x80 than to 0x00 or 0xFF, but probably not worrying about it much).
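A minimal sketch of that grinding loop, assuming for illustration that a node ID is the sha256 of a random 32-byte seed (a real node ID is derived from the node's key pair, so actual grinding would regenerate keys rather than raw seeds):

```python
import hashlib
import os


def grind_node_ids():
    """Grind random seeds until we hold one 32-byte node ID for every
    possible leading byte (0x00..0xFF), i.e. 256 roughly evenly-spaced IDs.

    NOTE: a real node ID is derived from the node's key pair; sha256 of a
    random seed is only a stand-in for that derivation here.
    """
    found = {}
    while len(found) < 256:
        seed = os.urandom(32)                    # stand-in for a private key
        node_id = hashlib.sha256(seed).digest()  # stand-in for the real ID derivation
        prefix = node_id[0]
        # Keep the first hit for each leading byte; per the comment above,
        # the rest of the node ID doesn't matter much.
        if prefix not in found:
            found[prefix] = (seed, node_id)
    return found


if __name__ == "__main__":
    ids = grind_node_ids()
    print(f"ground {len(ids)} node IDs, one per leading byte")
```

Covering every leading byte only takes a couple thousand hashes on average, which is far cheaper than trying to land IDs in specific high-numbered buckets.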
Closing, as our bridge architecture has evolved beyond this issue.
Although we aren't building this now, we're taking small steps toward it, with issues like #314.

Here is the architecture that currently makes the most sense to me:

[architecture diagram]
I'm imagining maybe 200 nodes, each with 5MB of storage, and only connected to the history network. The Splitter will push data in using the `portal_historyStore` json-rpc endpoint, which will gossip normally, as if it had received the data on-chain. It will then return the list of ENRs that accepted the new data. This can help us monitor whether the data is making it outside of our self-hosted nodes and bootnodes.

#314 will be a scaled-down, dumbed-down version of this ^, maybe running only 4-8 nodes and doing the dumb thing of just pushing all data to all nodes. Maybe some data gets totally dropped on the floor, which is fine for alpha-quality. Also, the monitoring of recipient ENRs will probably not be included.
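As a rough illustration of the Splitter's push step described above, here is a minimal sketch of calling `portal_historyStore` over json-rpc on one self-hosted node; the endpoint name comes from this issue, but the parameter list and the shape of the returned ENR list are assumptions:

```python
import json
import urllib.request


def history_store(rpc_url, content_key, content_value):
    """Push one piece of history content into a self-hosted node via the
    portal_historyStore json-rpc endpoint and return the ENRs reported as
    having accepted it.

    The endpoint name comes from this issue; the parameter list and the
    shape of the result are assumptions for illustration only.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "portal_historyStore",
        "params": [content_key, content_value],
    }
    request = urllib.request.Request(
        rpc_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    # Assumed result shape: a list of ENR strings for the peers that
    # accepted the gossiped content.
    return body.get("result", [])


# Hypothetical usage: push one header to a local node, then check which of
# the accepting ENRs are outside our own fleet and bootnodes.
# accepting_enrs = history_store("http://127.0.0.1:8545", "0x...", "0x...")
```

Comparing the returned ENRs against the Splitter's own ~200 node IDs and the bootnode ENRs would show whether content is actually escaping our own fleet.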
Potential Subtasks
This is a brainstorm of some ideas about how to accomplish the above architecture. It assumes that a number of things will already be taken care of by #314 -- so none of these are imminent.

- A new json-rpc endpoint, `portal_historyGossip`, that returns the list of ENRs that the new history was gossiped to (this will make the call take longer, as it must block during the gossip phase).
- Decide which (and how many, k) nodes the Splitter should call `portal_historyStore` on, for that piece of content. I'm guessing `0 < k < 6`. (One way to pick them is sketched below.)
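A minimal sketch of that node-picking step, assuming (this is not specified in the issue) that the Splitter chooses the k of its own nodes whose IDs are closest to the content ID by XOR distance:

```python
def pick_store_targets(content_id, node_ids, k=3):
    """Pick the k self-hosted nodes to call portal_historyStore on for one
    piece of content.

    Choosing the nodes closest to the content ID by XOR distance is an
    assumption; the issue only guesses 0 < k < 6.
    """
    def xor_distance(node_id):
        return int.from_bytes(content_id, "big") ^ int.from_bytes(node_id, "big")

    return sorted(node_ids, key=xor_distance)[:k]


# Hypothetical usage, e.g. with the ground node IDs from the sketch above:
# targets = pick_store_targets(content_id, all_node_ids, k=3)
```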