
feat: add a hive test file for 65 headers to test if offer works #27

Open · wants to merge 1 commit into base: master

Conversation

@KolbyML (Member) commented Sep 25, 2024

This test data is for two hive tests I am working on: one checks that we can send 64 headers, and one checks that the offer fails with 65.
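For context on what these tests exercise: the Portal wire format's Offer message carries its content keys as an SSZ List with (as I understand it) a 64-entry limit, which is exactly the boundary the two tests sit on. A minimal illustrative sketch in Rust; `MAX_KEYS_PER_OFFER` and `validate_offer_keys` are made-up names for illustration, not actual Trin code:

```rust
// Illustrative only: MAX_KEYS_PER_OFFER mirrors the assumed List[ByteList, 64]
// limit on the Offer message; validate_offer_keys is a hypothetical helper.
const MAX_KEYS_PER_OFFER: usize = 64;

fn validate_offer_keys(keys: &[Vec<u8>]) -> Result<(), String> {
    if keys.len() > MAX_KEYS_PER_OFFER {
        return Err(format!(
            "offer carries {} content keys, limit is {MAX_KEYS_PER_OFFER}",
            keys.len()
        ));
    }
    Ok(())
}

fn main() {
    // History content keys are 33 bytes: a 1-byte selector plus a 32-byte hash.
    let key = vec![0u8; 33];
    assert!(validate_offer_keys(&vec![key.clone(); 64]).is_ok()); // 64: should pass
    assert!(validate_offer_keys(&vec![key; 65]).is_err()); // 65: should fail
}
```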

@KolbyML KolbyML requested review from carver and morph-dev September 25, 2024 18:24
@KolbyML KolbyML marked this pull request as draft September 25, 2024 18:24
@KolbyML KolbyML marked this pull request as ready for review September 25, 2024 18:24
@KolbyML KolbyML marked this pull request as draft September 25, 2024 18:28
@KolbyML KolbyML marked this pull request as ready for review September 25, 2024 18:31
@carver left a comment

Rubber stamp after a very quick scan.

Let me know if there's something specific you want me to double-check.

@kdeme (Collaborator) left a comment

Just wanted to note that you would not be able to test the actual failure of sending over 65 headers?

It would stop at the offer/accept part and the data would not actually be sent?

You would still be able to test the transfer of the 64 items, which is something we also do not test currently.

@KolbyML (Member, Author) commented Sep 26, 2024

> Just wanted to note that you would not be able to test the actual failure of sending over 65 headers? It would stop at the offer/accept part and the data would not actually be sent?

I expect something to fail if I try to send 65 headers. This would be the failing case.

> You would still be able to test the transfer of the 64 items, which is something we also do not test currently.

This should succeed.

@morph-dev (Contributor) left a comment

LGTM :shipit:

@kdeme (Collaborator) commented Sep 26, 2024

> I expect something to fail if I try to send 65 headers. This would be the failing case.

Yes, I know this is the failing case. What I am trying to explain is that it will fail already at the accept message, that is, if you make the content key list also 65 in size (which would fail deserializing).

Of course you can make it send 64 content keys, and still send 65 content items over uTP. I'm also not sure how exactly you will test those cases over the network.
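To make the deserialization point concrete: SSZ encodes a list of variable-size items as a table of 4-byte offsets followed by the items, so a decoder knows the element count (first offset divided by 4) before reading a single key and can reject a 65-entry list up front. A rough sketch, not any client's actual decoder:

```rust
// Rough sketch of where a 65-key Offer would die at decode time; this is not
// Trin's (or any client's) real SSZ implementation.
const MAX_KEYS_PER_OFFER: usize = 64;

fn decoded_key_count(encoded: &[u8]) -> Result<usize, String> {
    if encoded.len() < 4 {
        return Err("truncated offset table".to_string());
    }
    // The first 4-byte little-endian offset points past the whole offset
    // table, so the element count is first_offset / 4.
    let first_offset =
        u32::from_le_bytes([encoded[0], encoded[1], encoded[2], encoded[3]]) as usize;
    if first_offset % 4 != 0 || first_offset > encoded.len() {
        return Err("malformed offset table".to_string());
    }
    let count = first_offset / 4;
    if count > MAX_KEYS_PER_OFFER {
        return Err(format!("{count} keys exceeds the List limit of {MAX_KEYS_PER_OFFER}"));
    }
    Ok(count)
}

fn main() {
    // 65 keys of 33 bytes each: the offset table alone reveals the overflow.
    let mut encoded = Vec::new();
    for i in 0..65u32 {
        encoded.extend_from_slice(&(65 * 4 + i * 33).to_le_bytes());
    }
    encoded.extend_from_slice(&vec![0u8; 65 * 33]);
    assert!(decoded_key_count(&encoded).is_err()); // rejected before any key is read
}
```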

@carver commented Sep 26, 2024

> Yes, I know this is the failing case. What I am trying to explain is that it will fail already at the accept message, that is, if you make the content key list also 65 in size (which would fail deserializing).

So I guess we're trying to get at: which kind of unexpected behavior are we trying to catch? What we want is for the client to refuse to offer the content when too many keys are provided, and return an error.

One could imagine:

  1. client panics/crashes when receiving 65 keys over json-rpc
  2. client has some bug that allows it to send too many content keys in the offer, and maybe it successfully transfers them (seems outlandish, though)
  3. client offers too many keys, but the remote peer correctly rejects/ignores the offer

We can detect 1 or 2. We probably can't tell the difference between 3 and expected behavior from hive, since that would also just return an error.

@kdeme (Collaborator) commented Sep 27, 2024

> We can detect 1 or 2. We probably can't tell the difference between 3 and expected behavior from hive, since that would also just return an error.

Yes, this is what I was trying to clarify. You are unlikely to end up testing case 3 (only if there are bugs that allow this message to pass on the network), and from what I understood from the original issue in Trin, this was exactly the problem.

Hence, these additions (the adapted json rpc call + these test vectors) are useful for testing the 64 content items transfer and for testing the actual new json rpc call functionality, but not really for the original issue. That would typically be easier to test within the client tests themselves.

@carver commented Sep 28, 2024

trin had two issues: A) you could offer unlimited items and B) it only accepted up to 8.

I agree this particular test doesn't catch issue A, and it's probably unreasonable to ask Hive to find it (we are writing internal client tests for it).

And also... the hive test for 65 items is quick enough to write. Scenarios 1 and 2 from earlier aren't totally implausible. I like the idea of moving toward including more "intended failure" tests. So with all that, I see at least a slim benefit to adding the test, and don't see any downside.

@ScottyPoi commented

Testing this locally, we get a Discv5 failure when trying to send an OFFER message with more than 31 of these keys (1023 bytes) at once. Thinking we must be hitting an upper limit on discv5 packet payloads?
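A back-of-the-envelope check of that theory. Only the 1280-byte discv5 packet cap and the 33-byte history content keys below are spec values; the framing overhead is a guess for illustration:

```rust
const DISCV5_MAX_PACKET: usize = 1280; // discv5 spec maximum packet size
const KEY_LEN: usize = 33; // 1-byte selector + 32-byte block hash
const SSZ_OFFSET: usize = 4; // per-key offset in the SSZ List encoding
const FRAMING_OVERHEAD: usize = 130; // assumed discv5 header + TALKREQ wrapping, not a spec constant

fn offer_fits(num_keys: usize) -> bool {
    let ssz_payload = 1 /* message selector byte */ + num_keys * (KEY_LEN + SSZ_OFFSET);
    FRAMING_OVERHEAD + ssz_payload <= DISCV5_MAX_PACKET
}

fn main() {
    assert!(offer_fits(31)); // 130 + 1 + 31 * 37 = 1278 bytes: just under the cap
    assert!(!offer_fits(32)); // 130 + 1 + 32 * 37 = 1315 bytes: over the cap
}
```

With overhead anywhere in that ballpark, the limit landing at exactly 31 keys (1023 bytes of raw key data) is consistent with a discv5 packet-size ceiling.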

@KolbyML (Member, Author) commented Oct 4, 2024

> Testing this locally, we get a Discv5 failure when trying to send an OFFER message with more than 31 of these keys (1023 bytes) at once. Thinking we must be hitting an upper limit on discv5 packet payloads?

So I guess we now have two options:

  • possibly send the data over uTP
  • also limit the number of keys that can be sent via a size limit in bytes

Maybe there is another possible choice; let me know if I missed one.

@ScottyPoi commented

> So I guess we now have two options:
>
> * possibly send the data over uTP
> * also limit the number of keys that can be sent via a size limit in bytes
>
> Maybe there is another possible choice; let me know if I missed one.

Not sure what you mean by the first one. Setting an actual size limit probably does make more sense than "number of keys".

@KolbyML (Member, Author) commented Oct 4, 2024

> Not sure what you mean by the first one. Setting an actual size limit probably does make more sense than "number of keys".

What I mean is that the content_key list in the offer message would be sent over uTP if the list is above 1065-ish bytes.
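A sketch of what that first option could look like; every name here is hypothetical, this is not how any client's offer flow currently works (and see kdeme's reply below on why it would need re-engineering):

```rust
// Hypothetical option 1: fall back to uTP when the encoded key list no longer
// fits inline. The 1065-byte figure is the rough threshold from the comment,
// not a spec constant.
const INLINE_KEYS_LIMIT_BYTES: usize = 1065;

enum OfferTransport {
    InlineTalkReq, // key list fits in the discv5 TALKREQ payload
    Utp,           // key list would be streamed over uTP instead
}

fn choose_transport(encoded_key_list: &[u8]) -> OfferTransport {
    if encoded_key_list.len() > INLINE_KEYS_LIMIT_BYTES {
        OfferTransport::Utp
    } else {
        OfferTransport::InlineTalkReq
    }
}

fn main() {
    assert!(matches!(choose_transport(&[0u8; 512]), OfferTransport::InlineTalkReq));
    assert!(matches!(choose_transport(&[0u8; 2048]), OfferTransport::Utp));
}
```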

@kdeme (Collaborator) commented Oct 4, 2024

> What I mean is that the content_key list in the offer message would be sent over uTP if the list is above 1065-ish bytes.

This, however, requires re-engineering the way the offer/accept works. Not really feasible short-term, and not really needed either?

So, as you already suggest, I'd go for the additional payload limit, which needs to be set because the size of content keys is content/subnetwork specific. (This is sort of implied/enforced currently by discv5 packet limits.)

Perhaps we also want to lower the current content keys list limit to a value that is more sane (I'm assuming here that the current value is not sane, but perhaps it is for some content keys)?
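A minimal sketch of the suggested byte-based cap, layered on top of the key-count limit. `MAX_OFFER_PAYLOAD_BYTES` is an illustrative placeholder, not a value specified anywhere today:

```rust
// Placeholder sketch: enforce a byte budget alongside the assumed 64-key List
// limit, since key sizes differ per content type/subnetwork.
const MAX_KEYS_PER_OFFER: usize = 64;
const MAX_OFFER_PAYLOAD_BYTES: usize = 1024; // illustrative value

fn offer_within_limits(keys: &[Vec<u8>]) -> bool {
    // Approximate SSZ size of the list: one 4-byte offset per key plus the key bytes.
    let encoded_len: usize = keys.iter().map(|k| 4 + k.len()).sum();
    keys.len() <= MAX_KEYS_PER_OFFER && encoded_len <= MAX_OFFER_PAYLOAD_BYTES
}

fn main() {
    let history_key = vec![0u8; 33]; // 33-byte history content key
    assert!(offer_within_limits(&vec![history_key.clone(); 27])); // 27 * 37 = 999 bytes: fits
    assert!(!offer_within_limits(&vec![history_key; 64])); // passes the count limit, blows the byte budget
}
```

The appeal of a byte cap is that it stays subnetwork agnostic: a pure key-count limit implies very different payload sizes depending on how large that network's content keys are.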

@ScottyPoi commented Oct 4, 2024

> This, however, requires re-engineering the way the offer/accept works. Not really feasible short-term, and not really needed either?

> Perhaps we also want to lower the current content keys list limit to a value that is more sane (I'm assuming here that the current value is not sane, but perhaps it is for some content keys)?

Agreed. Using uTP to send the OFFER message itself is overkill; we should just cap the list of keys at 1024 bytes.

We used to have a max OFFER of 26 keys in Ultralight. I thought that number came from the spec, but I can't trace it anywhere.

@kdeme (Collaborator) commented Oct 16, 2024

There is an existing issue regarding this by the way: ethereum/portal-network-specs#160
