Replies: 16 comments
-
From today's meeting, here are some of the test contracts from the test app. It would be great if these could return some fee estimate data on testnet.

Test contract call

In prod we test with the following coins:

When needed we also purchase NFTs in prod, but with the price that isn't something we do frequently unless an issue calls for it.
-
The list above is quite right, and with that QA does another round of extensive testing with the use case(s) below:

Testnet: Contract deployment, which we test via the testnet demo app https://stacks-wallet-web.vercel.app/
Mainnet: Integrated app(s)
-
This is a graph of data from a stacks-node over the last two days. It shows the median fee rate (measured as ustx per 1/60kth of the block limit) in each Stacks block in blue, and the corresponding "middle" estimate in the node in purple.

Looking at this data, it seems like there are a couple of things happening. First, there are massive outlier blocks -- you can see three or four peaks in that graph, two of which approach 1m ustx per 1/60kth of the block limit. Such a fee rate would lead to normal contract-calls costing 50s to 100s of STX in fees; for STX transfers, it implies a cost of just 1-3 STX. These outlier blocks swing the estimate dramatically, and the estimate remains high for a long period of time after the outlier.

Second, these outliers appear to be mostly caused by blocks that contain exclusively STX transfers (which tend to be large fee rate outliers): 0x94cd4829829efe814c0555681f0e6f9911b612609f88e04d8e74bb3c0f4fc

I'm not 100% sure what this all tells us about solving this problem, but I think there are a couple of noteworthy things:
It seems like solving (1) would be the most readily achievable -- altering the update function from a simple exponential windowing to something that's much "stickier", like the median of a window, could solve this issue. Solving (2) and (3) also seems readily achievable, but wouldn't eliminate the possibility of outliers entirely. Solving (4) seems like the trickiest of the bunch: the current implementation assumes a lot about using a mined block as input rather than the mempool, and beyond that, the structure of our mempool makes deciding what's "in" or "out" of the mempool tricky.
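As a concrete illustration of why a median-of-window update is "stickier" than exponential windowing, here is a minimal Python sketch with made-up numbers; the function names and the smoothing constant are hypothetical and not taken from the stacks-node code:

```python
from collections import deque
from statistics import median

def update_exponential(current_estimate, block_rate, alpha=0.5):
    """Exponential windowing: one outlier block shifts the estimate a lot,
    and it only decays back gradually over subsequent blocks."""
    return alpha * block_rate + (1 - alpha) * current_estimate

class WindowMedianEstimator:
    """'Stickier' alternative: the median over the last N per-block rates."""
    def __init__(self, window_size=5):
        self.window = deque(maxlen=window_size)

    def update(self, block_rate):
        self.window.append(block_rate)
        return median(self.window)

# One outlier block (1,000,000 ustx per budget unit) among normal blocks.
per_block_rates = [50, 60, 55, 1_000_000, 58, 62]

exp_estimate, window_estimator = 50, WindowMedianEstimator()
for rate in per_block_rates:
    exp_estimate = update_exponential(exp_estimate, rate)
    window_estimate = window_estimator.update(rate)
    print(f"block rate {rate:>9}: exponential={exp_estimate:>11.1f}  window-median={window_estimate:>9.1f}")
```

The exponential estimate stays inflated for several blocks after the outlier, while the windowed median returns to the typical rate as soon as normal blocks again make up the majority of the window.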
-
When blocks are not full, a fee of 0 for the percentage of the block which was left unused should be factored into the estimation.
-
After looking at some of the data more, and talking through possible approaches with @jcnelson, I think the best approach is something like the following:
The above approach gives us a relatively straightforward way to fix the issues we see with the estimator now, but other approaches may be necessary long term: ultimately, fee estimation should use information gleaned from the mempool. We can test how well the above approach works with the existing block data for solving problems 1-3, but we will need to do more testing for problem 4 -- we can implement an integration test that uses the fee rate estimator and tests behavior during congestion, but we should also perform some explicit tests on testnet as well.
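As a rough picture of what that congestion integration test could look like, here is a hypothetical Python sketch; the estimator class and the scenario numbers are invented stand-ins (the same toy window-median idea as in the earlier sketch), not the actual stacks-node test harness:

```python
from collections import deque
from statistics import median

class WindowMedianEstimator:
    """Toy stand-in for the fee rate estimator: median of the last N block rates."""
    def __init__(self, window_size=5):
        self.window = deque(maxlen=window_size)

    def update(self, block_rate):
        self.window.append(block_rate)
        return median(self.window)

def test_estimator_tracks_congestion_and_recovers():
    # Hypothetical per-block "middle" fee rates: normal load, then sustained
    # congestion, then demand falls back to normal.
    normal, congested, recovery = [60] * 10, [5_000] * 10, [60] * 10

    estimator = WindowMedianEstimator(window_size=5)
    estimates = [estimator.update(rate) for rate in normal + congested + recovery]

    assert max(estimates[10:20]) >= 5_000   # the estimate rises during congestion
    assert estimates[-1] <= 100             # and falls back once demand drops

test_estimator_tracks_congestion_and_recovers()
```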
-
Okay -- some data from some initial experimentation: This graph plots the "middle" estimate for each block from 39,485 to 39,685, calculated with three different methods. The blue line computes the middle the same way the current stacks-node does: simple median of the block's fee rates. The yellow line computes the middle using the weighted median. The red line uses a weighted median and simulates filling the block with rate=1 transactions.

You can see that the weighted median lowers the estimates (which is expected: weighted medians will down-weight STX transfers, which typically have the highest fee rate), but it does not eliminate the outlier blocks. That is because the big outlier blocks are usually very nearly empty -- their fee rate is determined by one or two transactions, which won't change when the fee rates are just weighted. Adding block fill eliminates these outliers and reduces the estimate a lot in empty-ish blocks.

The next set of experiments was intended to test the fee rate estimates as the blocks are processed. These graphs show two different tests: first, using the weighted medians and block fill to compute each block's stats and, second, using the unweighted median (i.e., the current implementation). In each graph, two methods for estimating over a window are used: the current exponential windowing method and then the median of a 5-block window. You can see that median windowing does indeed deal with outliers much more smoothly. However, with the better processed-block metrics, it isn't quite as sharp of a difference (because there are fewer outliers).

If you're interested in playing with these methods and data, the scripts I used for generating the underlying data are available in kantai/stacks-block-fee-metrics
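For anyone who wants the per-block computation spelled out, here is a rough Python sketch of the three "middle" variants compared above; the fee rates and block-budget fractions are invented for illustration, and the real scripts live in kantai/stacks-block-fee-metrics:

```python
from statistics import median

def simple_middle(txs):
    """Current node behavior: plain median of the block's fee rates."""
    return median(rate for rate, _ in txs)

def weighted_middle(txs, pad_to_full=False, fill_rate=1):
    """Weighted median: each tx's fee rate is weighted by the fraction of the
    block budget it consumed. Optionally pad the unused budget with a
    pseudo-transaction at fill_rate (rate=1, as in the red line above)."""
    entries = sorted(txs)
    used = sum(weight for _, weight in entries)
    if pad_to_full and used < 1.0:
        entries = sorted(entries + [(fill_rate, 1.0 - used)])
    total = sum(weight for _, weight in entries)
    cumulative = 0.0
    for rate, weight in entries:
        cumulative += weight
        if cumulative >= total / 2:
            return rate
    return entries[-1][0]

# A nearly empty block: two STX transfers with outlier fee rates plus one
# cheap contract-call, given as (fee_rate, fraction_of_block_budget) pairs.
block = [(900_000, 0.001), (850_000, 0.001), (120, 0.05)]

print(simple_middle(block))                      # 850000 -- the outlier median
print(weighted_middle(block))                    # 120    -- transfers down-weighted
print(weighted_middle(block, pad_to_full=True))  # 1      -- empty block space dominates
```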
-
Suggesting that we default to the network minimum fee for the "low" option instead of an arbitrary max. I'm adding a copy of it here because it has already gotten some thumbs up: leather-io/extension#2086
-
My question is (and pardon my late entrance): how can we rely on historical rates if we have never had accurate estimation? We are pushing things through at 0.75 STX due to current wallet defaults, but in my discussions with Clarity devs and other members of the Stacks community, it seems that many transactions go through (anecdotally) for 0.005 or 0.001. Shouldn't we at least use this as the minimum/baseline?

Also, if the network goes through a period of instability (e.g., the Megapont Robot drop), then the historicals will be high even though the network has since recovered. Wouldn't we be better off looking at mempool levels or some other performance metric -- if not solely, then in addition to these estimates?

Additionally, I have had issues with contract calls like mint-many where the smart contract is, in fact, performing multiple transactions and thus requires more gas. We perhaps need a methodology for developers to either flag these types of calls so they are recognized by the wallet, or to override defaults. If this work is already done, that's great, but I do think it deserves further discussion, perhaps in a new issue.
-
@314159265359879 That would be an interesting idea. Keep in mind that there are different levels to what is going on behind the scenes. So, we could return the 5th percentile at one level, and then the minimum fee at another level.
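To sketch what that layering could look like (hypothetical names and values; this is not the actual node or wallet code): the estimation level keeps returning its percentile-based values, and a higher level substitutes the network minimum fee for the "low" option it shows to users.

```python
# Hypothetical two-level sketch: estimation level vs. presentation level.
NETWORK_MINIMUM_FEE = 1  # placeholder value; the real network minimum differs

def estimation_level(block_fee_rates):
    """Lower level: rough 5th/50th/95th percentile fee rates from a block."""
    rates = sorted(block_fee_rates)
    def pick(p):
        return rates[min(len(rates) - 1, int(p * len(rates)))]
    return {"low": pick(0.05), "middle": pick(0.50), "high": pick(0.95)}

def presentation_level(estimates):
    """Higher level: default the displayed 'low' option to the network minimum fee."""
    return {**estimates, "low": NETWORK_MINIMUM_FEE}

print(presentation_level(estimation_level([0, 0, 50, 60, 900_000])))
```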
-
@falkonprods That is a good point. We are using these past-data estimates as a kind of "85% solution", because it might be harder to use the mempool to make the estimates. I think discussion can continue on how to use the mempool instead.
-
@gregorycoppola To be fair, I think this is a good long-term solution; the question is how we first normalize the gas fees to accurate levels.
-
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
-
Let's keep this active. MattySTX mentioned in the SIP call today that his latest report on Stacking and Mining indicates there isn't a functioning fee market yet: the number of transactions has increased while the total fees paid has decreased, likely because blocks are not fully utilized yet. As the network grows and the number of transactions increases, improved fee estimation will be important for a functioning fee market to establish. When the speed of L1 blocks goes up by a lot (as looked into by the working group), it may be more important to use the mempool as the basis for the estimations?
-
Assigning to @kantai for now. Please re-assign as you see fit.
-
I think this should be converted to a discussion: "improving fee estimation" is a path, not a destination. The fee rate estimator has improved since this issue was opened, but more improvements can be made. However, specific improvements should get their own issues, and so should specific problem reports. The behavior that spawned this particular issue (spiking fee estimations) has been addressed. Unless someone feels strongly that this should remain an issue rather than a discussion, I'll hit that convert button on Monday.
-
I have done some work exploring this here: https://stacksonchain.com/dashboards/Gas-Fees-Mempool/186

I assume miners prioritize by fee rate per tx size, but I have not checked if or how this is actually done in the node software. For simplicity, I estimate the fees for a standard STX transfer, which is around 180 bytes. I reproduce the existing fee estimate like this:
I created an alternative heuristic fee estimate based on mempool analysis like this:
In summary, this does the following:
Caveats:
At time of writing this is the result for the existing estimate:
And this is the result for the new estimate:
(Sometimes the "normal" result comes out as zero and I'm not really sure why.)
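For readers who would rather see the idea than the SQL, here is a rough Python sketch of the heuristic described above, with an invented mempool and a placeholder block size; it is not the dashboard's actual query. It sorts the mempool by fee per byte (the assumption that miners prioritize this way), walks down it the way a miner filling blocks would, and prices a ~180-byte STX transfer just above the cheapest transaction that still fits within the target number of blocks.

```python
STX_TRANSFER_BYTES = 180          # approximate size of a standard STX transfer
BLOCK_SIZE_BYTES = 2 * 1024**2    # placeholder per-block byte budget (not the real limit)

def mempool_fee_estimates(mempool, blocks_to_wait=(1, 3, 6)):
    """mempool: list of (fee_ustx, size_bytes) pairs for pending transactions.
    For each confirmation target, find the lowest fee-per-byte among the
    transactions that fit in that many blocks, and price a 180-byte transfer
    just above it."""
    ranked = sorted(mempool, key=lambda tx: tx[0] / tx[1], reverse=True)
    estimates = {}
    for depth in blocks_to_wait:
        budget = depth * BLOCK_SIZE_BYTES
        used, cutoff_rate = 0, 0.0
        for fee, size in ranked:
            if used + size > budget:
                break
            used += size
            cutoff_rate = fee / size
        estimates[depth] = int(cutoff_rate * STX_TRANSFER_BYTES) + 1
    return estimates

# Invented backlog: a few high-fee txs, a block-plus of mid-fee txs, many cheap ones.
example_mempool = ([(180_000, 180)] * 50
                   + [(50_000, 1_000)] * 4_000
                   + [(300, 180)] * 20_000)
print(mempool_fee_estimates(example_mempool))
# Deeper confirmation targets get cheaper once the mid-fee backlog clears.
```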
-
The current fee estimator looks at mined blocks, ranks the transactions in each block by their fee rate, takes the 5th, 50th, and 95th percentiles, and uses those three values as fee rate estimates.
This can lead to some problems -- it seems like very high fee estimates are commonly obtained when STX transfers are broadcast with fee amounts around 1 STX (STX transfers are much cheaper operations than contract-calls, like 50-100x cheaper, so when most transfers use fees around 1 STX, they imply that contract-calls should cost 50-100 STX).
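To make that arithmetic concrete, here is a minimal Python sketch of the percentile scheme with invented cost numbers (the real estimator in the stacks-node measures execution cost differently): fee rate here is fee divided by the share of the block budget a transaction consumes, so a 1 STX transfer that uses a tiny slice of the budget produces an enormous rate, and multiplying that rate back by a contract-call's budget share implies a fee in the tens of STX.

```python
from statistics import quantiles

# Illustrative numbers only: fees in microSTX, "cost" as the fraction of the
# block budget consumed (transfers are far cheaper operations than calls).
TRANSFER_COST = 0.0002
CONTRACT_CALL_COST = 0.01      # roughly 50x a transfer, per the text above

block_txs = [
    {"fee": 1_000_000, "cost": TRANSFER_COST},      # 1 STX transfers
    {"fee": 1_000_000, "cost": TRANSFER_COST},
    {"fee": 1_000_000, "cost": TRANSFER_COST},
    {"fee": 250_000,   "cost": CONTRACT_CALL_COST}, # modestly priced contract-calls
    {"fee": 300_000,   "cost": CONTRACT_CALL_COST},
]

# Rank by fee rate (fee per unit of block budget), then take 5th/50th/95th percentiles.
rates = sorted(tx["fee"] / tx["cost"] for tx in block_txs)
cuts = quantiles(rates, n=20)   # 19 cut points: cuts[0]≈5th, cuts[9]≈50th, cuts[-1]≈95th
low, middle, high = cuts[0], cuts[9], cuts[-1]

# The transfers dominate the middle rate, so the implied fee for a contract-call
# (middle rate times its budget share) lands in the tens of STX.
print(f"middle rate: {middle:,.0f} ustx per budget unit")
print(f"implied contract-call fee: {middle * CONTRACT_CALL_COST / 1_000_000:.0f} STX")
```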