
Question regarding DA.MaxBlobSize #47

Open
pepyakin opened this issue Feb 20, 2024 · 3 comments

@pepyakin
Contributor

pepyakin commented Feb 20, 2024

go-da/da.go

Lines 7 to 8 in 011ba69

// MaxBlobSize returns the max blob size
MaxBlobSize(ctx context.Context) (uint64, error)

It is not clear what is permitted inside of the implementation. Both concrete implementations (namely, at the time of writing, avail and celestia) return a constant number.

However, the error in the return type suggests it may be possible to implement this function via an RPC. Also, it's not entirely clear when this request is going to be called: once or before every submission? If the former, what would be the recommendations to support DA layers with [governance-]configurable blob sizes?
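
For concreteness, here is a minimal sketch of the two implementation styles in question: a hard-coded constant versus a value fetched from the node. This is not the actual avail or celestia code; the sizeRPC client and the constant are made up for illustration, and only MaxBlobSize comes from the quoted interface.

package dasketch

import "context"

// DA is narrowed to the single method quoted above.
type DA interface {
	MaxBlobSize(ctx context.Context) (uint64, error)
}

// constDA returns a hard-coded limit, which is what the existing
// implementations effectively do today.
type constDA struct{}

func (constDA) MaxBlobSize(ctx context.Context) (uint64, error) {
	return 1_000_000, nil // illustrative constant, not a real limit
}

// sizeRPC is a hypothetical client for asking the DA node directly.
type sizeRPC interface {
	CurrentMaxBlobSize(ctx context.Context) (uint64, error)
}

// rpcDA forwards the question over RPC, which the error in the return
// type seems to permit.
type rpcDA struct{ rpc sizeRPC }

func (d rpcDA) MaxBlobSize(ctx context.Context) (uint64, error) {
	return d.rpc.CurrentMaxBlobSize(ctx) // may fail, hence the error
}

var (
	_ DA = constDA{}
	_ DA = rpcDA{}
)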

@nashqueue
Member

> Also, it's not entirely clear when this request is going to be called: once or before every submission?

For Rollkit specifically, or rollups in general?

> If the former, what would be the recommendations to support DA layers with [governance-]configurable blob sizes?

If the MaxBlobSize changes over time, you should be able to query it dynamically; the implementation should allow that. Celestia might change the block size, which would change the MaxBlobSize.
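
To illustrate what querying dynamically could look like on the caller side, here is a minimal sketch that re-queries the limit right before splitting and submitting data. Only MaxBlobSize comes from the quoted interface; the submitter type and its Submit method are assumptions made for the example.

package dasketch

import (
	"context"
	"errors"
)

// submitter is a hypothetical narrow view of a DA client: the quoted
// MaxBlobSize plus an assumed Submit method.
type submitter interface {
	MaxBlobSize(ctx context.Context) (uint64, error)
	Submit(ctx context.Context, blob []byte) error
}

// submitChunked re-queries the limit on every call and sizes blobs
// against it. If the limit decreases after the query, a later Submit can
// still be rejected; that is the window discussed below in this thread.
func submitChunked(ctx context.Context, da submitter, data []byte) error {
	limit, err := da.MaxBlobSize(ctx)
	if err != nil {
		return err
	}
	if limit == 0 {
		return errors.New("DA reported a zero max blob size")
	}
	for len(data) > 0 {
		n := len(data)
		if uint64(n) > limit {
			n = int(limit)
		}
		if err := da.Submit(ctx, data[:n]); err != nil {
			return err
		}
		data = data[n:]
	}
	return nil
}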

@pepyakin
Contributor Author

To contextualize, I am asking these questions with a DA layer implementer's hat on, and MaxBlobSize feels a bit underspecified.

Ok, so it seems that an RPC implementation is permitted and MaxBlobSize is allowed to return different values. However, in its present form the interface has some edge cases, mostly of the TOCTOU kind:

  1. If the caller calls MaxBlobSize periodically, it's possible that the config changed in the meantime and the next submission happens with a stale blob size limit.
  2. If the caller calls MaxBlobSize before every submission, it is still possible that the limit changes while the blobs are in flight.

Those edge cases can only be triggered if MaxBlobSize decreases. At a glance, it seems like a good idea to spell out this contract in the docs, e.g.:

// MaxBlobSize returns the max blob size.
//
// The underlying value can change over time and the client can invoke this function 
// multiple times (TODO: be more specific on when) over the lifetime of this object. 
// That said, the maximum size is not allowed to decrease.
MaxBlobSize(ctx context.Context) (uint64, error) 

But on closer inspection, this interface either lies or essentially imposes a constraint on the chain's governance.

@nashqueue
Member

Great comment, you bring up a valid point here. I would say that a blob size change should generally be infrequent, which makes changing it possibly a breaking change.

Another possibility I see: what if the block time decreases but the throughput of the DA layer stays the same? That could potentially decrease the max blob size.

From an economic perspective, filling up the full block might be inefficient during congestion, as you are competing against variable fees.

Going back to your attack, what if the interface exposed both a current and a future max blob size, so that rollup implementations can be prepared for the change? The only constraint you would then have to impose is the allowed frequency of change, which determines exactly how often the caller has to check for it.
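
A rough sketch of what a current plus future max blob size could look like, assuming the future limit is announced together with the DA height at which it takes effect. All names below (BlobSizeSchedule, ScheduledMaxBlobSize, SwitchHeight) are hypothetical and not part of go-da.

package dasketch

import "context"

// BlobSizeSchedule announces both the limit in force now and the next one,
// plus when the switch happens (hypothetical shape, for illustration only).
type BlobSizeSchedule struct {
	Current      uint64 // limit in force right now
	Next         uint64 // limit after the switch; equals Current if nothing is scheduled
	SwitchHeight uint64 // DA height at which Next becomes effective
}

// ScheduledDA would sit next to the existing method so a rollup can size
// in-flight blobs against min(Current, Next) and never be surprised by a
// decrease, as long as changes are announced at least one check interval
// in advance.
type ScheduledDA interface {
	MaxBlobSize(ctx context.Context) (uint64, error)
	ScheduledMaxBlobSize(ctx context.Context) (BlobSizeSchedule, error)
}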

Do you have a proposed change to the interface that would make dynamic max blob sizes more feasible?
