A local Overflow Pool to cache transactions during high traffic #30567
base: master
Conversation
This doesn't make sense to me. If the pool is full, then arguably it's because we've reached the constraints set upon it: the amount of memory we were willing to dedicate to transactions is filled up. You're saying that we should add a second standalone pool with 256MB of memory? Why not just add more memory to the first pool? Or maybe you're saying that during high traffic our pool is not fast enough to keep up the pace, and transactions are dropped which would otherwise be accepted, and that this would be solved by adding a secondary, faster pool? If that is the case, then I think you are wrong: a slow pool should act as backpressure. We have a number of goroutines that fetch transactions from peers, and if the pool is slow, then they simply have to wait longer to deliver their loads.
I think the PR is actually a stab at your local pool that piles up local transactions and drips them into the live pool. @emailtovamos If your goal is to accumulate local transactions above the pool limits, that's something @holiman has a PR for and that we actively want to address. If your goal is to overflow network transactions, that doesn't make sense; they should just be handled by the pool.
I think not -- it doesn't differentiate between
256MB of extra memory (or more) can be allocated by the node by setting the limit of the Overflow Pool, so that nodes with less hardware capacity don't have to support it, whereas nodes with high capacity can afford to increase this memory. It is configurable. For example, currently if a user sends a transaction while the pool is full, the transaction will likely fail (assuming the gas price wasn't high enough) and they will probably send the transaction again after some time. With this concept, they send it once and perhaps end up waiting the same amount of time, but they won't have to send it twice.
This idea is mainly to avoid transaction loss when the current TxPool overflows, since losing transactions makes for very bad UX and is not uncommon during network traffic bursts (like inscriptions). But simply increasing the size of the current TxPool has side effects, mainly because it takes more resources to handle those transactions: CPU, memory and network. Currently, BSC sets the TxPool's size to around 10K to 15K. This new overflow pool is meant to be simple and to take very limited resources while caching a huge number of transactions during burst traffic; the OverflowPool is expected to cache more than 100K transactions when the current TxPool is full.
Description
Details
a. Main Pool
There is no new structure for the Main Pool; it is just the current transaction pool (Legacy Pool). When this pool isn't full, newly received transactions are broadcast to all the relevant peers, just as in the current behaviour.
b. Overflow Pool: the local buffer (LRU or Heap)
When the Main Pool overflows during high traffic, continuing to broadcast new transactions would put a lot of stress on the network. So we put any new transaction into the Overflow Pool and neither broadcast nor announce it.
The size of the Overflow Pool could be very large in order to absorb a large traffic volume, e.g. 500K transactions. Assuming an average transaction size of 500B, that takes around 256MB of memory, which is acceptable.
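To make the idea concrete, here is a minimal sketch of a memory-bounded LRU buffer of the kind described above. All names (`OverflowPool`, `Tx`, `Add`, `Pop`) and the eviction policy are illustrative assumptions for this sketch, not the PR's actual implementation.

```go
package main

import (
	"container/list"
	"fmt"
)

// Tx is a stand-in for a real transaction; only the hash and encoded size
// matter for this sketch.
type Tx struct {
	Hash string
	Size int // encoded size in bytes
}

// OverflowPool is an illustrative LRU buffer bounded by a memory budget
// (e.g. 256 * 1024 * 1024 bytes for a 256MB cap).
type OverflowPool struct {
	budget int                      // max total bytes held
	used   int                      // bytes currently held
	order  *list.List               // front = oldest, back = newest
	index  map[string]*list.Element // tx hash -> list element
}

func NewOverflowPool(budget int) *OverflowPool {
	return &OverflowPool{
		budget: budget,
		order:  list.New(),
		index:  make(map[string]*list.Element),
	}
}

// Add caches tx, evicting the oldest entries once the budget is exceeded.
func (p *OverflowPool) Add(tx Tx) {
	if el, ok := p.index[tx.Hash]; ok {
		p.order.MoveToBack(el) // refresh recency for duplicates
		return
	}
	p.index[tx.Hash] = p.order.PushBack(tx)
	p.used += tx.Size
	for p.used > p.budget {
		oldest := p.order.Front()
		old := oldest.Value.(Tx)
		p.order.Remove(oldest)
		delete(p.index, old.Hash)
		p.used -= old.Size
	}
}

// Pop removes and returns the oldest cached transaction, if any.
func (p *OverflowPool) Pop() (Tx, bool) {
	el := p.order.Front()
	if el == nil {
		return Tx{}, false
	}
	tx := el.Value.(Tx)
	p.order.Remove(el)
	delete(p.index, tx.Hash)
	return tx, true
}

func main() {
	pool := NewOverflowPool(1000) // tiny budget for demonstration
	pool.Add(Tx{Hash: "a", Size: 500})
	pool.Add(Tx{Hash: "b", Size: 500})
	pool.Add(Tx{Hash: "c", Size: 500}) // exceeds budget: "a" is evicted
	tx, _ := pool.Pop()
	fmt.Println(tx.Hash) // b
}
```

Bounding by total bytes rather than transaction count keeps the budget honest when transaction sizes vary widely, which matters during inscription-style bursts.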
How to flush transactions from Overflow Pool to Main Pool:
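The PR's flushing policy is not shown in this excerpt, but one plausible shape is a loop that moves the oldest overflow transactions into the Main Pool whenever slots free up. The sketch below uses stand-in string slices for the two pools and a hypothetical `flushOnce` helper; none of these names come from the PR.

```go
package main

import "fmt"

// flushOnce moves as many of the oldest overflow transactions into the main
// pool as there are free slots. The pools are stand-in string slices and
// capMain an illustrative capacity; the real pools are far more complex.
func flushOnce(mainPool, overflow []string, capMain int) ([]string, []string) {
	free := capMain - len(mainPool)
	if free <= 0 || len(overflow) == 0 {
		return mainPool, overflow
	}
	n := free
	if n > len(overflow) {
		n = len(overflow)
	}
	// At this point the moved transactions would also be validated and
	// broadcast to peers, like transactions that entered the Main Pool
	// directly.
	mainPool = append(mainPool, overflow[:n]...)
	return mainPool, overflow[n:]
}

func main() {
	overflow := []string{"tx1", "tx2", "tx3"} // cached during the burst
	var mainPool []string
	// In a real node this would run on a timer or whenever a block
	// removes transactions from the Main Pool.
	mainPool, overflow = flushOnce(mainPool, overflow, 2)
	fmt.Println(mainPool, overflow) // [tx1 tx2] [tx3]
}
```

Flushing oldest-first preserves arrival order, which keeps per-sender nonce ordering more likely to be intact when transactions re-enter the Main Pool.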