feat: add a hive test file for 65 headers to test if offer works #27
base: master
Force-pushed from 3ada846 to 34e76b6
Rubber stamp after a very quick scan.
Let me know if there's something specific you want me to double-check.
Just wanted to note that you would not be able to test the actual failure of sending over 65 headers. It would stop at the offer/accept part, and the data would not actually be sent.
You would still be able to test the transfer of the 64 items, which is something we also do not test currently.
I expect something to fail if I try to send 65 headers. This would be the failing case
This should succeed
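The two expectations above (64 headers succeed, 65 fail) can be sketched as a plain validity check. This is a hedged illustration, not code from any Portal client: the 64-key figure is the SSZ list limit on the Offer message's content keys in the Portal wire protocol, and `validate_offer_keys` is a made-up name for this sketch.

```python
# Hedged sketch: the Portal wire protocol caps the Offer message's
# content_keys at an SSZ list limit of 64, so a 65-key list cannot even
# be (de)serialized by a compliant peer. validate_offer_keys is a
# made-up helper name, not a real client function.

MAX_KEYS_PER_OFFER = 64  # SSZ List limit on content_keys


def validate_offer_keys(content_keys: list) -> None:
    """Refuse to build an offer whose key list exceeds the SSZ limit."""
    if len(content_keys) > MAX_KEYS_PER_OFFER:
        raise ValueError(
            f"{len(content_keys)} content keys exceeds the limit of "
            f"{MAX_KEYS_PER_OFFER}"
        )


validate_offer_keys([b"\x00"] * 64)      # 64 keys: no error
try:
    validate_offer_keys([b"\x00"] * 65)  # 65 keys: rejected locally
except ValueError as exc:
    print(exc)
```

The point of rejecting locally is exactly the discussion above: the sender should refuse to construct the oversized message rather than fail somewhere in the offer/accept exchange.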
LGTM
Yes, I know this is the failing case. What I am trying to explain is that it will fail already at the accept message, that is, if you make the content key list also 65 in size (which would fail to deserialize). Of course, you could make it send 64 content keys and still send 65 content items over uTP. I'm also not sure how exactly you will test those cases over the network.
So I guess what we're trying to get at is: which kind of unexpected behavior are we trying to catch? What we want is for the client to refuse to offer the content when too many keys are provided, and to return an error. One could imagine:
We can detect 1 or 2. We probably can't tell the difference between 3 and expected behavior from hive, since that would also just return an error.
Yes, this is what I was trying to clarify. You are unlikely to end up testing case 3 (only if there are bugs allowing you to pass this message on the network), and from what I understood from the original issue in trin, this was exactly the problem. Hence these additions (the adapted JSON-RPC call + these test vectors) are useful for testing the 64-content-item transfer and for testing the new JSON-RPC call functionality, but not really for the original issue. That would typically be easier to test within the client tests themselves.
trin had two issues: A) you could offer unlimited items, and B) it only accepted up to 8. I agree this particular test doesn't catch issue A, and it's probably unreasonable to ask hive to find it (we are writing internal client tests for it). Also, the hive test for 65 items is quick enough to write, and scenarios 1 and 2 from earlier aren't totally implausible. I like the idea of moving toward including more "intended failure" tests. So with all that, I see at least a slim benefit to adding the test, and I don't see any downside.
Testing this locally and we get a
So I guess the question now is that we have 2 options.
Maybe we have another possible choice; let me know if I missed one.
Not sure what you mean by the first one. Setting an actual size limit probably does make more sense than a "number of keys" limit.
The content_key list in the offer message would be sent over uTP if the content_key list is above 1065-ish bytes.
This however requires re-engineering the way offer/accept works. Not really feasible short-term, and not really needed either? So, as you already suggest, I'd go for the additional payload limit, since the size of content keys is content/subnetwork specific. (This is sort of implied/enforced currently by the discv5 packet limits.) Perhaps we also want to lower the current content-keys list limit to a value that is more sane (I'm assuming here that the current value is not sane, but perhaps it is for some content keys)?
Agreed. Using uTP to send the OFFER message itself is overkill; we should just cap the list of keys at 1024 bytes. We used to have a max OFFER of 26 keys in ultralight. I thought that number came from the spec, but I can't trace it anywhere.
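A minimal sketch of the byte-size cap idea, under stated assumptions: an SSZ list of variable-length items serializes as one 4-byte offset per element plus the element bytes; the 1024-byte cap is the value suggested in this thread, not a spec constant; and `fits_in_offer` is a made-up helper name.

```python
# Hedged sketch of capping the OFFER key list by total encoded size
# rather than by key count. MAX_OFFER_KEYS_BYTES is the 1024-byte value
# suggested above, not a spec constant; fits_in_offer is a made-up name.

MAX_OFFER_KEYS_BYTES = 1024


def fits_in_offer(content_keys: list) -> bool:
    # An SSZ List of variable-length items serializes as one 4-byte
    # offset per element followed by the concatenated element bytes.
    encoded_size = sum(4 + len(key) for key in content_keys)
    return encoded_size <= MAX_OFFER_KEYS_BYTES


# With 33-byte history content keys (1 selector byte + 32-byte hash),
# each key costs 37 encoded bytes, so at most 27 keys fit under 1024
# bytes -- interestingly close to the 26-key limit ultralight once used.
history_key = b"\x00" + b"\x11" * 32
print(fits_in_offer([history_key] * 27))  # True
print(fits_in_offer([history_key] * 28))  # False
```

A size cap like this also composes naturally with the discv5 packet limit mentioned above, since it bounds the serialized message directly rather than indirectly via a key count.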
There is an existing issue regarding this, by the way: ethereum/portal-network-specs#160
Force-pushed from 89710e4 to 34e76b6
This test data is for two hive tests I am working on: whether we can send 64 headers, and whether it fails on 65.
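The pair of tests described here can be summarized as two cases against whatever offer entry point the test drives. This is a shape sketch only: `fake_offer` is a stand-in that models just the 64-key limit under test, not real hive simulator or client code.

```python
# Hedged sketch of the two test scenarios: offering 64 headers should go
# through, offering 65 should be rejected. fake_offer stands in for the
# real client call that hive would drive; it models only the key limit.

def fake_offer(content_keys: list) -> None:
    if len(content_keys) > 64:
        raise ValueError("too many content keys")


def run_case(num_headers: int) -> bool:
    """Return True if offering num_headers header keys succeeded."""
    # Illustrative 33-byte history keys: selector byte + block hash.
    keys = [b"\x00" + i.to_bytes(32, "big") for i in range(num_headers)]
    try:
        fake_offer(keys)
        return True
    except ValueError:
        return False


assert run_case(64) is True    # the 64-header test should pass
assert run_case(65) is False   # the 65-header test expects rejection
```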