
header/store: Append operation seems to be slow/bottlenecked #861

Closed · renaynay opened this issue Jun 27, 2022 · 4 comments
Labels: area:header (Extended header), area:storage, bug (Something isn't working)
renaynay (Member) commented Jun 27, 2022

Specs

Remote
Ubuntu 20.04 (LTS) x64
1 vCPU
1GB / 25GB Disk

Local
Total Number of Cores: 8 (4 performance and 4 efficiency)
Memory: 16 GB

Version

Semantic version: v0.3.0-rc1-74-g7f70b9b
Commit: 7f70b9b
Build Date: Mon Jun 27 11:59:20 CEST 2022
System version: arm64/darwin
Golang version: go1.18

Problem

On both machines specified above, the header store sometimes appears to be extremely slow on Append. This bottlenecks other dependent services (such as the DASer's catchUp routine) and causes the syncer's queue of headers waiting to be written to grow, since the store's pending header batch is limited to DefaultWriteBatchSize = 2048.
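For context, here is a minimal sketch of the batching behaviour described above; the types and functions are hypothetical and only DefaultWriteBatchSize comes from the issue itself. Headers accumulate in an in-memory pending batch and are flushed to disk once the batch fills up, so a slow flush stalls every subsequent Append and the syncer's own queue keeps growing:

```go
package main

import (
	"fmt"
	"time"
)

// DefaultWriteBatchSize mirrors the limit mentioned in the issue; the
// surrounding types are illustrative only, not the real store.
const DefaultWriteBatchSize = 2048

type header struct {
	Height uint64
}

type store struct {
	pending []header
}

// Append buffers headers in the pending batch and flushes to disk once the
// batch reaches DefaultWriteBatchSize. If flushing is slow, callers of
// Append (e.g. the syncer) block here and their own queues grow.
func (s *store) Append(headers ...header) error {
	s.pending = append(s.pending, headers...)
	if len(s.pending) >= DefaultWriteBatchSize {
		if err := s.flush(); err != nil {
			return err
		}
		s.pending = s.pending[:0]
	}
	return nil
}

// flush simulates a disk write of the whole batch.
func (s *store) flush() error {
	time.Sleep(10 * time.Millisecond) // stand-in for a datastore batch commit
	return nil
}

func main() {
	s := &store{}
	for h := uint64(1); h <= 4096; h++ {
		if err := s.Append(header{Height: h}); err != nil {
			fmt.Println("append failed:", err)
			return
		}
	}
	fmt.Println("pending after appends:", len(s.pending))
}
```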

The DASer's catchUp routine is bottlenecked by the store's ability to provide a header at <height>. If the height is not yet available, the request hangs until the header store can serve that header. The header store can only serve the header if it is either in the store's pending header queue (waiting to be written to disk) or already stored on disk. If neither of those criteria is satisfied, the store "subscribes" to that height and waits until the header at <height> is at least inside the store's pending queue.
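To make the wait-for-height behaviour concrete, below is a minimal sketch with hypothetical names (not the actual celestia-node store): GetByHeight serves from the pending queue or from disk when it can, and otherwise blocks on a per-height channel that Append signals once the header arrives.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type header struct{ Height uint64 }

// heightSub is a hypothetical sketch of the "subscribe to a height" behaviour:
// readers block until a writer announces that the requested height is available.
type heightSub struct {
	mu      sync.Mutex
	pending map[uint64]header        // headers queued but not yet on disk
	disk    map[uint64]header        // headers already written to disk
	waiters map[uint64][]chan header // readers waiting for a height
}

func newHeightSub() *heightSub {
	return &heightSub{
		pending: make(map[uint64]header),
		disk:    make(map[uint64]header),
		waiters: make(map[uint64][]chan header),
	}
}

// GetByHeight serves the header from the pending queue or disk if possible,
// otherwise it waits until Append makes the height available.
func (h *heightSub) GetByHeight(ctx context.Context, height uint64) (header, error) {
	h.mu.Lock()
	if hdr, ok := h.pending[height]; ok {
		h.mu.Unlock()
		return hdr, nil
	}
	if hdr, ok := h.disk[height]; ok {
		h.mu.Unlock()
		return hdr, nil
	}
	ch := make(chan header, 1)
	h.waiters[height] = append(h.waiters[height], ch)
	h.mu.Unlock()

	select {
	case hdr := <-ch:
		return hdr, nil
	case <-ctx.Done():
		return header{}, ctx.Err()
	}
}

// Append places the header into the pending queue and wakes any waiters.
func (h *heightSub) Append(hdr header) {
	h.mu.Lock()
	h.pending[hdr.Height] = hdr
	for _, ch := range h.waiters[hdr.Height] {
		ch <- hdr
	}
	delete(h.waiters, hdr.Height)
	h.mu.Unlock()
}

func main() {
	hs := newHeightSub()
	go func() {
		time.Sleep(50 * time.Millisecond)
		hs.Append(header{Height: 42})
	}()
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	hdr, err := hs.GetByHeight(ctx, 42)
	fmt.Println(hdr, err)
}
```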

The issue is that the header store becomes bottlenecked on the Append operation, sometimes with a pending queue of 30,000+ headers waiting to be written to disk (the syncer will log a pending head at height 82744, for example, while the last update to the header store's chain is at 55297). While observing this bottleneck, I also see memory usage increase slowly but steadily.

renaynay added the area:header (Extended header), area:storage, and bug (Something isn't working) labels on Jun 27, 2022
renaynay moved this to TODO in Celestia Node on Jun 27, 2022
Wondertan (Member) commented:

This is just the fetching of headers being slow.

renaynay (Member, Author) commented:

Had a debugging conversation with @Wondertan:

The issue seems to be not in Append itself but in the actual fetching of batched headers.
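One way to confirm where the time goes (a hedged sketch with hypothetical exchange/store interfaces, not the project's actual API) is to time the fetch and Append phases of each sync step separately, so a slow network fetch shows up distinctly from a slow disk write:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

type header struct{ Height uint64 }

// exchange and store are hypothetical interfaces standing in for the real
// fetcher and header store; they exist only to show where to measure.
type exchange interface {
	GetRange(ctx context.Context, from, amount uint64) ([]header, error)
}

type store interface {
	Append(ctx context.Context, headers ...header) error
}

// syncRange fetches a range of headers and appends them, logging how long
// each phase takes so a fetch bottleneck is distinguishable from a slow Append.
func syncRange(ctx context.Context, ex exchange, st store, from, amount uint64) error {
	start := time.Now()
	headers, err := ex.GetRange(ctx, from, amount)
	if err != nil {
		return err
	}
	fetchTook := time.Since(start)

	start = time.Now()
	if err := st.Append(ctx, headers...); err != nil {
		return err
	}
	appendTook := time.Since(start)

	fmt.Printf("range [%d, %d): fetch=%s append=%s\n", from, from+amount, fetchTook, appendTook)
	return nil
}

// slowExchange and fastStore are mock implementations so the sketch runs.
type slowExchange struct{}

func (slowExchange) GetRange(ctx context.Context, from, amount uint64) ([]header, error) {
	time.Sleep(200 * time.Millisecond) // simulate a slow network fetch
	out := make([]header, amount)
	for i := range out {
		out[i] = header{Height: from + uint64(i)}
	}
	return out, nil
}

type fastStore struct{}

func (fastStore) Append(ctx context.Context, headers ...header) error { return nil }

func main() {
	_ = syncRange(context.Background(), slowExchange{}, fastStore{}, 1, 512)
}
```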

Things we can do to improve, regardless:

renaynay (Member, Author) commented:

Closing for now

Repository owner moved this from TODO to Done in Celestia Node on Jun 27, 2022
Wondertan (Member) commented:

FYI, Viet confirmed two Bridge Node bootstrappers are not syncing, which explains the slow sync.
