feat: extends stamps capacity #361
Conversation
Force-pushed from 65dcada to 9fc0689
A few general things we need to address before moving forward.
Firstly, you are treating this feature as another exclusive mode of operation of gateway-proxy's stamp management. I don't think that is correct, though. The whole gateway-proxy feature layout is getting quite complicated, and once Attila is back we really should think about a better architecture for this. But in the meantime...
What I see is that there are essentially two modes of postage management. Either it makes sense to run it with autobuy enabled, which basically allows uploading content to Swarm with a limited life-span on the network, in which case the extending modes do not really make sense (at least to me; feel free to come up with a use case proving me wrong). Or you want to make sure the uploaded content stays on the network in some form, which is what the stamp-extending features are for.
So, basically, I can see the possibility that when stamp management is not running in hardcoded or autobuy mode, it can run with both extendsTTL and extendsCapacity enabled at the same time, as sketched below. I think we should support this use case.
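A minimal sketch of what I mean; all type, function, and env-variable names here are hypothetical, not the actual gateway-proxy implementation:

```typescript
// Hypothetical mode resolution: hardcoded and autobuy stay exclusive, while
// extendsTTL and extendsCapacity may both be enabled outside of those two.
type StampMode = 'hardcoded' | 'autobuy' | 'extends'

interface ExtendsOptions {
  extendsTTL: boolean
  extendsCapacity: boolean
}

function resolveStampMode(env: Record<string, string | undefined>): { mode: StampMode; extend?: ExtendsOptions } {
  if (env.POSTAGE_STAMP) return { mode: 'hardcoded' } // hypothetical env name
  if (env.POSTAGE_AUTOBUY === 'true') return { mode: 'autobuy' } // hypothetical env name

  return {
    mode: 'extends',
    extend: {
      extendsTTL: env.POSTAGE_EXTENDS_TTL === 'true', // hypothetical env name
      extendsCapacity: env.POSTAGE_EXTENDS_CAPACITY === 'true', // hypothetical env name
    },
  }
}
```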
Secondly, we have quite neglected testing in your previous PRs. We should fix that later on, but let's start with this PR first. Vojtech created quite nice tooling for testing the StampsManager (e.g. you have a stamps DB, you can set their values, etc.), so please use it to thoroughly test the (edge) cases of the extendsCapacity mode. Also, there should probably be some tests that cover the mentioned interoperability of the extendsCapacity and extendsTtl modes. Lastly, please structure the tests so that each mode has its own describe section, along the lines of the skeleton below.
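Roughly this kind of layout is what I have in mind; the describe names and the arrange/assert comments are illustrative only, the actual cases would use the existing stamps-DB test tooling:

```typescript
// Illustrative Jest layout only: one describe block per mode.
describe('StampsManager', () => {
  describe('autobuy mode', () => {
    it('buys a new stamp when the usage threshold is reached', async () => {
      // arrange a nearly-full stamp in the stamps DB, assert a new batch is bought
    })
  })

  describe('extendsCapacity mode', () => {
    it('dilutes the stamp when the usage threshold is reached', async () => {
      // arrange a nearly-full stamp, assert its depth is increased
    })
  })

  describe('extendsCapacity + extendsTTL interoperability', () => {
    it('extends the same stamp in both dimensions without buying a new one', async () => {
      // cover the combined mode discussed above
    })
  })
})
```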
Force-pushed from 6367e1d to 8f2f4c1
Force-pushed from 4169a3a to de6f7bf
const stamp = await buyNewStamp(defaultDepth, defaultAmount, beeDebug)
await topUpStamp(beeDebug, stamp.batchId, (Number(stamp.stamp.amount) * 2).toString())
Also as a general comment (this maybe concerns @agazso the most): we would need an atomic topup + dilute operation on the blockchain level in order to make this feature solid, otherwise one of the two operations can fail (HTTP network issue, gas issue, non-graceful shutdown, running out of funds), making stamps grow asymmetrically.
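A rough sketch of the failure mode, assuming bee-js exposes topUpBatch and diluteBatch on BeeDebug; the helper name below is made up:

```typescript
import { BatchId, BeeDebug } from '@ethersphere/bee-js'

// Hypothetical helper: the two chain operations are separate calls, so a crash
// between them grows the stamp in only one dimension.
async function extendStamp(beeDebug: BeeDebug, batchId: BatchId, amount: string, newDepth: number): Promise<void> {
  await beeDebug.topUpBatch(batchId, amount) // step 1 succeeds: per-chunk balance raised
  // an HTTP error, gas issue or non-graceful shutdown here means step 2 never runs,
  // leaving the batch with a longer TTL but no extra capacity (asymmetric growth)
  await beeDebug.diluteBatch(batchId, newDepth)
}
```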
Oops, meant these comments for src/stamps.ts, not the test 🙂
@Cafe137 I already tested the additive property of the topup call, and yes, there is no need to double up the amount.
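If the topup call is indeed additive, as the test above showed, then in the quoted snippet something like this is enough:

```typescript
// topUpStamp adds the given amount per chunk on top of the existing balance,
// so passing the original amount again already doubles the effective amount
await topUpStamp(beeDebug, stamp.batchId, defaultAmount)
```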
Oh, good catch @Cafe137 about the additive side!
And yes, I agree about the atomicity problem here; I am afraid, though, that with the current Bee API it is not possible for us to handle this :-( ethersphere/bee#3359 would help us with it.
I am running with this env:
I have a clean Bee with no stamps, and it tries to buy a new stamp:
which fails due to missing envs.
But I think it should not buy stamps at all.
I want to run only …; I have this env:
Calls made to Bee:
Also another observation when …
I am sorry, but the code is getting too messy, beyond what I can understand and reason about. But the fact that we have cleared up how the Stamps Manager can operate (i.e. that the autobuy and extends functionality are exclusive) gives me an idea of how to improve the situation. Let's discuss it on the call later today.
There are several other smaller issues I have identified. One of the major ones is the isNaN check and input validation in general, which we should tackle (as per issue #393); see the sketch below.
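A small sketch of the kind of validation I mean; the env name is just an example:

```typescript
// Reject missing, non-numeric, non-integer or non-positive values instead of
// letting NaN propagate into the stamp calculations
function parsePositiveInt(name: string, value: string | undefined): number {
  const parsed = Number(value)

  if (value === undefined || Number.isNaN(parsed) || !Number.isInteger(parsed) || parsed <= 0) {
    throw new Error(`Invalid value for ${name}: ${value}`)
  }

  return parsed
}

const depth = parsePositiveInt('POSTAGE_DEPTH', process.env.POSTAGE_DEPTH) // example env name
```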
Then there is the testing. I am sorry, but the tests you have are insufficient; we will have to improve them to make sure the growing complexity of this project keeps working correctly. Let's discuss this on the call as well so we can get aligned on what I expect here.
Closes #311
With this feature, instead of buying a new postage stamp every time the usage threshold is reached, it becomes possible to extend the capacity (depth) of the postage stamp using the dilute endpoint, as in the sketch below.
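A minimal sketch of the idea, assuming bee-js's BeeDebug.diluteBatch; the helper name and the usage callback are assumptions, not the actual implementation:

```typescript
import { BeeDebug, PostageBatch } from '@ethersphere/bee-js'

// When utilization crosses the threshold, increase the batch depth by one via
// the dilute endpoint instead of buying a new stamp; each +1 of depth doubles
// the batch's chunk capacity.
async function maybeExtendCapacity(
  beeDebug: BeeDebug,
  stamp: PostageBatch,
  usageThreshold: number,
  getUsage: (stamp: PostageBatch) => number, // e.g. the proxy's own usage helper, returning a ratio in [0, 1]
): Promise<void> {
  if (getUsage(stamp) >= usageThreshold) {
    await beeDebug.diluteBatch(stamp.batchID, stamp.depth + 1)
  }
}
```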