-
This argument makes a lot of sense to me, especially given the current design. State and bandwidth don't have the same costs, therefore each has a different ideal fee market. Do you think this changes once we get to the point where some form of internal sharding is used? I guess not?
-
I also don't see how the resources are meaningfully burstable in practice, assuming that most validators run on dedicated hardware with a dedicated network connection, CPU resources, etc.
-
This is an excellent writeup @cmwaters! I agree with your points. Specifically, I agree that resources don't really seem burstable, at least not in practice. What seems to fluctuate in practice is the supply and demand of block space. So I think there are two somewhat orthogonal conversations to be had. One, how do we price resources independently, e.g. storage/state rent vs. computation (bandwidth is tricky to price)? Two, how do we effectively price inclusion of txs into a block, e.g. via the second-price auction you mentioned?
-
EIP-1559-based fee mechanisms rest on the assumption that the underlying hardware nodes run on has some elasticity. Put another way: the model assumes that machines have a burst capacity and a sustained capacity, that these are not the same, and that bursts are therefore curbed by making it increasingly expensive to sustain them. Multidimensional EIP-1559, as I understand it, extends this model by saying that there are categories of resource usage for which burst capacity and sustained capacity differ by different amounts. Hence burst usage that targets a particular resource should get more expensive at a different rate than usage of other resources. For example, nodes may not be able to sustain bursts of bandwidth usage the same way they can sustain bursts of high computational load.
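For concreteness, here's a minimal sketch of the kind of per-resource base fee update that multidimensional EIP-1559 implies. The resource names, targets, and adjustment denominator are made-up placeholders, not Celestia parameters; the point is just that each resource's fee reacts independently to its own usage relative to its own sustained target.

```go
package main

import "fmt"

// resource describes one fee dimension. Targets and maxStep are
// illustrative assumptions, not values from any real network.
type resource struct {
	name    string
	target  uint64 // sustained ("target") usage per block
	maxStep uint64 // adjustment denominator (EIP-1559 uses 8)
}

// nextBaseFee moves a resource's base fee toward equilibrium:
// above-target usage raises the fee, below-target usage lowers it.
func nextBaseFee(baseFee, used uint64, r resource) uint64 {
	if used == r.target {
		return baseFee
	}
	if used > r.target {
		delta := baseFee * (used - r.target) / r.target / r.maxStep
		if delta == 0 {
			delta = 1
		}
		return baseFee + delta
	}
	delta := baseFee * (r.target - used) / r.target / r.maxStep
	return baseFee - delta
}

func main() {
	resources := []resource{
		{name: "bandwidth", target: 1_000_000, maxStep: 8},
		{name: "state", target: 10_000, maxStep: 8},
	}
	baseFees := map[string]uint64{"bandwidth": 100, "state": 100}
	// A block that bursts bandwidth but stays under the state target:
	usage := map[string]uint64{"bandwidth": 2_000_000, "state": 5_000}
	for _, r := range resources {
		baseFees[r.name] = nextBaseFee(baseFees[r.name], usage[r.name], r)
		fmt.Printf("%s base fee -> %d\n", r.name, baseFees[r.name])
	}
}
```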
Trying to put on a critical hat here, my question is: is that really the case? Is burst capacity not equal to sustained capacity? As an example that ignores multidimensionality for just a moment: if we put aside gossiping and other software-side inefficiencies (mutex contention, for example), would Celestia not be able to handle sustained 2MB blocks (or whatever capacity we deem the upper bound of the system)? Is there something about handling the previous block that carries over to the next? Do the cores heat up and become unable to achieve the same clock speed when verifying signatures of the next block? Can we not expect the machines to run tirelessly under the same load?
If we aren't trying to create a fee market that compensates for the protocol being unable to sustain a high load, then if I'm not mistaken, what we're left with is a price-setting mechanism that tries to find the price which, if paid, gives the highest possible guarantee of inclusion in a block. In that case, I feel like I would rather have wallets try to predict that price than some on-chain protocol which could easily get it wrong (i.e. set a price too high or too low). I think what would help improve those guarantees is a mechanism I described in my original write-up where users set the maximum they are willing to pay but everyone pays the same as the lowest-paying transaction included in the block. This essentially works like a second-price auction, except the uniform price everyone pays is the lowest bid that makes it into the block; a rough sketch is below.
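Here's a rough sketch of that clearing rule, assuming a simple greedy fill by bid. The struct fields and numbers are illustrative, not anything from celestia-app.

```go
package main

import (
	"fmt"
	"sort"
)

// tx is a hypothetical transaction with a bid (max price per unit the
// user is willing to pay) and the block space it consumes.
type tx struct {
	id     string
	maxFee uint64
	size   uint64
}

// clearBlock fills the block highest-bid first and returns the included
// txs plus the uniform clearing price: the bid of the lowest-paying
// transaction that made it into the block.
func clearBlock(pool []tx, blockLimit uint64) (included []tx, clearingPrice uint64) {
	sort.Slice(pool, func(i, j int) bool { return pool[i].maxFee > pool[j].maxFee })
	var used uint64
	for _, t := range pool {
		if used+t.size > blockLimit {
			continue
		}
		included = append(included, t)
		used += t.size
		clearingPrice = t.maxFee // lowest bid so far among included txs
	}
	return included, clearingPrice
}

func main() {
	pool := []tx{
		{id: "a", maxFee: 50, size: 400},
		{id: "b", maxFee: 30, size: 300},
		{id: "c", maxFee: 10, size: 500},
		{id: "d", maxFee: 25, size: 200},
	}
	included, price := clearBlock(pool, 1000)
	for _, t := range included {
		// Everyone pays the clearing price, not their own bid.
		fmt.Printf("tx %s included, pays %d (bid %d)\n", t.id, price, t.maxFee)
	}
}
```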
I open the floor for debate.