Add content validation for accepted content #376
Conversation
Force-pushed from b75fc0f to 0e361d0
Force-pushed from 0e361d0 to 63e9ea8
Force-pushed from 311daa3 to 8612661
Force-pushed from acb19e0 to 674e911
Force-pushed from 674e911 to 0a1c882
Looks good, but I think we should try to pass a vector of content keys/values to propagate_gossip as an argument, or take a different refactor direction, with the idea being to propagate all the content to an interested node with one OFFER message.
trin-core/src/portalnet/storage.rs (Outdated)
) -> Result<bool, PortalStorageError> {
    match self.should_store(key)? {
        true => {
            let _ = self.store(key, value)?;
Suggested change:
- let _ = self.store(key, value)?;
+ self.store(key, value)?;
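The suggestion above drops the `let _ =` binding, since `?` already propagates the error and the `Ok` value isn't used. A minimal, self-contained sketch of the pattern under discussion — `PortalStorage`, `should_store`, and the byte-slice types here are simplified stand-ins, not the actual trin-core implementation:

```rust
use std::collections::HashMap;

#[derive(Debug)]
struct PortalStorageError(String);

// Toy storage: a radius flag plus an in-memory map, standing in for trin's DB.
struct PortalStorage {
    radius_allows_all: bool,
    db: HashMap<Vec<u8>, Vec<u8>>,
}

impl PortalStorage {
    // Hypothetical radius check: the real code compares XOR distance to radius.
    fn should_store(&self, _key: &[u8]) -> Result<bool, PortalStorageError> {
        Ok(self.radius_allows_all)
    }

    fn store(&mut self, key: &[u8], value: &[u8]) -> Result<(), PortalStorageError> {
        self.db.insert(key.to_vec(), value.to_vec());
        Ok(())
    }

    /// Store the value only when it falls within this node's radius.
    /// Returns whether the value was stored.
    fn store_if_should(&mut self, key: &[u8], value: &[u8]) -> Result<bool, PortalStorageError> {
        match self.should_store(key)? {
            true => {
                // No `let _ =` needed: `?` propagates errors, and the Ok value is ().
                self.store(key, value)?;
                Ok(true)
            }
            false => Ok(false),
        }
    }
}

fn main() {
    let mut storage = PortalStorage { radius_allows_all: true, db: HashMap::new() };
    let stored = storage.store_if_should(b"key", b"value").unwrap();
    println!("stored: {}", stored);
}
```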
trin-core/src/portalnet/overlay.rs (Outdated)
    .store_if_should(&key, &content_value.to_vec());

// Propagate all validated content, whether or not it was stored.
Self::propagate_gossip((key, content_value), kbuckets, request_tx);
I understand the reason for propagating only one key/content_value at a time, but this prevents us from propagating multiple content items to an interested node with one OFFER message.
I definitely agree: one OFFER message with many key/values > many messages with one key/value.
It seems we have 3 goals here, and one of them will need to be sacrificed to achieve the other two.
- Only OFFER content that's been validated
  - Dropping this requirement = potentially rebroadcasting invalid/malicious content into the network
- Validate content in a non-blocking manner
  - Dropping this requirement = not possible; we need to validate content in a non-blocking manner, since content validation might require dropping back into the same overlay network to fetch another piece of data
- Collect all validated content and offer it in a single message
  - Dropping this requirement = increases network traffic/congestion

imo, the last one is the best option to drop. It's possible I'm missing something here. Let me know what you think. Is there an alternative workaround here that can satisfy all three requirements?
I get that it's more complex, but I don't understand why it would be impossible to collect all the validated content into a batch. When all of the content has been found valid or invalid, it could all be sent together, right?
FWIW, I think it would be fine to punt this into an issue for performance improvement.
I think it is fine to block the thread that is validating the content until all content is validated (we can use different threads in parallel, one per content item, and wait for all threads to finish) and then pass only the valid content items in a batch to propagate_gossip, as Jason mentioned.
trin-core/src/portalnet/overlay.rs (Outdated)
enrs_and_content
    .entry(enr.to_base64())
    .or_default()
    .push(content_key_value.clone().0.into());
This logic becomes obsolete if propagate_gossip accepts only one content item argument, because we will never have more than one item per node.
trin-core/src/portalnet/overlay.rs (Outdated)
@@ -297,37 +316,44 @@ where
let mut enrs_and_content: HashMap<String, Vec<RawContentKey>> = HashMap::new();

// Filter all nodes from overlay routing table where XOR_distance(content_id, nodeId) < node radius
for content_key_value in content {
The reason for this iteration and the hashmap enrs_and_content above is that we want to accomplish two things:
- Have all interested nodes as keys of the hashmap.
- Have all content items a node is interested in as the hashmap values.

This allows us to send one OFFER message per node with multiple content items instead of multiple OFFER messages per node.
Force-pushed from c5a22e3 to 07402ea
Looks good to me. I like the new comments and the JoinHandle approach 👍
trin-core/src/portalnet/overlay.rs (Outdated)
content: Vec<(TContentKey, ByteList)>,
kbuckets: Arc<RwLock<KBucketsTable<NodeId, Node>>>,
command_tx: UnboundedSender<OverlayCommand<TContentKey>>,
) -> anyhow::Result<()> {
I think we don't return errors here anymore, so we can remove this Result?
Force-pushed from 07402ea to 5b3bae1
It's possible that you'll get more information about what is panicking if you cherry-pick this change: I suspect there's missing information that's making it difficult to debug the crash.
Force-pushed from 2cb9eb0 to 045ef11
You should be able to remove the increase poolsize commit now, after rebase.
Force-pushed from 045ef11 to 5b3bae1
Force-pushed from 5b3bae1 to 4fda56f
What was wrong?
Content validation needed for accepted content.
Fixes #340

How was it fixed?
- overlay_protocol.store_overlay_content() -> storage.store_if_should()
- Removed self ref for propagate_gossip() so that it can be used inside a tokio task

To-Do