feat(meta): support database checkpoint isolation #19173
Conversation
LGTM!
Rest LGTM.
I just ran a test locally: I created a separate database with a blackhole sink that sleeps for 1s on every barrier to mock slowness. In the original database, the e2e test runs successfully without being blocked by the slowness, which tentatively proves the functionality of database checkpoint isolation. Will add more tests in a later PR.
I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.
What's changed and what's your intention?
In this PR, we support checkpoint isolation between different databases.
Previously, the global barrier worker in the meta node had a struct `CheckpointControl` that managed the barrier state of the whole global streaming graph. To support checkpoint isolation between different databases, the global streaming graph is divided into per-database streaming graphs. The previous `CheckpointControl` is renamed to `DatabaseCheckpointControl`, which tracks the barrier state of the streaming graph of a single database. During recovery, we divide the runtime information by database. Each database has its own streaming graph that independently injects and collects barriers.

The `partial_graph_id` in `StreamingControlRequest` is changed from u32 to u64. Previously, the partial graph id was the u32 job id for creating snapshot backfill jobs, and u32::MAX for the global streaming graph. In this PR, the u64 partial graph id is composed from the database id and the job id: the high 32 bits store the database id, and the low 32 bits store the job id (for the partial graph of created jobs, it's u32::MAX).
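The id layout described above can be sketched as follows. The helper names here are hypothetical, chosen for illustration; they are not the actual identifiers introduced by this PR.

```rust
// Hypothetical helpers illustrating the u64 partial graph id layout:
// high 32 bits = database id, low 32 bits = job id.
fn compose_partial_graph_id(database_id: u32, job_id: u32) -> u64 {
    ((database_id as u64) << 32) | (job_id as u64)
}

fn database_id_of(partial_graph_id: u64) -> u32 {
    (partial_graph_id >> 32) as u32
}

fn job_id_of(partial_graph_id: u64) -> u32 {
    // Truncating cast keeps only the low 32 bits.
    partial_graph_id as u32
}

fn main() {
    // The partial graph of created jobs uses u32::MAX as the job id.
    let id = compose_partial_graph_id(42, u32::MAX);
    assert_eq!(database_id_of(id), 42);
    assert_eq!(job_id_of(id), u32::MAX);
    println!("0x{id:016x}");
}
```

Since the two halves occupy disjoint bit ranges, decomposition is lossless for any (database id, job id) pair.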
Checklist

- `./risedev check` (or alias, `./risedev c`)

Documentation

Release note
If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.