From 0a9179fca50acd5c8cd8de572275ffa8e99baf25 Mon Sep 17 00:00:00 2001
From: Arseny Sher
Date: Fri, 3 Jan 2025 19:52:08 +0300
Subject: [PATCH] Add some details to rfc

---
 ...35-safekeeper-dynamic-membership-change.md | 196 +++++++++++-------
 1 file changed, 126 insertions(+), 70 deletions(-)

diff --git a/docs/rfcs/035-safekeeper-dynamic-membership-change.md b/docs/rfcs/035-safekeeper-dynamic-membership-change.md
index 239ec58186c3..cea9af34abaf 100644
--- a/docs/rfcs/035-safekeeper-dynamic-membership-change.md
+++ b/docs/rfcs/035-safekeeper-dynamic-membership-change.md
@@ -81,7 +81,7 @@ configuration generation in them is less than its current one. Namely, it
refuses to vote, to truncate WAL in `handle_elected` and to accept WAL. In
response it sends its current configuration generation to let walproposer know.

-Safekeeper gets `PUT /v1/tenants/{tenant_id}/timelines/{timeline_id}/configuration`
+Safekeeper gets `PUT /v1/tenants/{tenant_id}/timelines/{timeline_id}/configuration`
accepting `Configuration`. Safekeeper switches to the given conf if it is higher
than its current one and ignores it otherwise. In any case it replies with
```
@@ -103,7 +103,7 @@ currently and tries to communicate with all of them. However, the list does not
define consensus members. Instead, on start walproposer tracks the highest
configuration it receives from `AcceptorGreeting`s. Once it assembles greetings
from a majority of `sk_set` and a majority of `new_sk_set` (if it is present), it
-establishes this configuration as its own and moves to voting.
+establishes this configuration as its own and moves to voting.
It should stop talking to safekeepers not listed in the configuration at this point,
though it is not unsafe to continue doing so.
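
As an illustration of the joint-quorum rule just described, here is a minimal sketch in
Rust (walproposer itself is C; the names below are hypothetical and not actual
walproposer code): greetings or votes must cover a majority of `sk_set` and, while the
configuration is joint, also a majority of `new_sk_set`.

```rust
// Hypothetical sketch of the joint-quorum rule. Types are illustrative only.
type NodeId = u64;

fn quorum_reached(sk_set: &[NodeId], new_sk_set: Option<&[NodeId]>, acked: &[NodeId]) -> bool {
    let majority = |set: &[NodeId]| {
        let acks = set.iter().filter(|id| acked.contains(id)).count();
        acks > set.len() / 2
    };
    // A majority of the old set is always required; the new set only matters
    // while a joint configuration is in effect.
    majority(sk_set) && new_sk_set.map_or(true, majority)
}
```
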
@@ -119,7 +119,7 @@ refusal to accept due to configuration change) it simply restarts.
The following algorithm can be executed anywhere having access to the configuration
storage and the safekeepers. It is safe to interrupt / restart it and to run multiple
instances of it concurrently, though likely one of them won't make
-progress then. It accepts `desired_set: Vec<NodeId>` as input.
+progress then. It accepts `desired_set: Vec<NodeId>` as input.
The algorithm will refuse to make the change if it encounters a previous interrupted
change attempt, but in this case it will try to finish it.
@@ -140,7 +140,7 @@ storage are reachable.
   safe. Failed CAS aborts the procedure.
4) Call `PUT` `configuration` on safekeepers from the current set, delivering
   them `joint_conf`. Collecting responses from a majority is required
-   to proceed. If any response returned generation higher than
+   to proceed. If any response returned a generation higher than
   `joint_conf.generation`, abort (another switch raced us). Otherwise, choose
   max `<last_log_term, flush_lsn>` among responses and establish it as (in memory)
   `sync_position`. Also choose max `term` and establish it as (in
@@ -149,49 +149,49 @@ storage are reachable.
   without ack from the new set. Similarly, we'll bump the term on the new majority to
   `sync_term` so that two computes with the same term are never elected.
4) Initialize the timeline on safekeeper(s) from `new_sk_set` where it
-   doesn't exist yet by doing `pull_timeline` from the majority of the
+   doesn't exist yet by doing `pull_timeline` from the majority of the
   current set. Doing that on a majority of `new_sk_set` is enough to
   proceed, but it is reasonable to ensure that all `new_sk_set` members
   are initialized -- if some of them are down why are we migrating there?
-5) Call `POST` `bump_term(sync_term)` on safekeepers from the new set.
+5) Call `POST` `bump_term(sync_term)` on safekeepers from the new set. Success
   on a majority is enough.
6) Repeatedly call `PUT` `configuration` on safekeepers from the new set,
   delivering them `joint_conf` and collecting their positions. This will
-   switch them to the `joint_conf` which generally won't be needed
+   switch them to the `joint_conf`, which generally won't be needed
   because `pull_timeline` already includes it, and additionally it would be
   broadcast by compute. More importantly, we may proceed to the next step
-   only when `<last_log_term, flush_lsn>` on the majority of the new set reached
-   `sync_position`. Similarly, on the happy path no waiting is not needed because
+   only when `<last_log_term, flush_lsn>` on a majority of the new set has reached
+   `sync_position`. Similarly, on the happy path no waiting is needed because
   `pull_timeline` already includes it. However, we should double check to be
   safe. For example, the timeline could have been created earlier, e.g.
-   manually or after try-to-migrate, abort, try-to-migrate-again sequence.
-7) Create `new_conf: Configuration` incrementing `join_conf` generation and having new
-   safekeeper set as `sk_set` and None `new_sk_set`. Write it to configuration
+   manually or after a try-to-migrate, abort, try-to-migrate-again sequence.
+7) Create `new_conf: Configuration` incrementing the `joint_conf` generation and having the new
+   safekeeper set as `sk_set` and `None` as `new_sk_set`. Write it to the configuration
   storage under one more CAS.
8) Call `PUT` `configuration` on safekeepers from the new set,
-   delivering them `new_conf`. It is enough to deliver it to the majority
+   delivering them `new_conf`. It is enough to deliver it to a majority
   of the new set; the rest can be updated by compute.

I haven't put huge effort into making the description above very precise, because
natural language is prone to interpretation anyway. Instead I'd like to
make a TLA+ spec of it.
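
To make the control flow easier to follow, below is a rough, non-normative Rust sketch
of the procedure. The `ConfigStore` and `Quorum` traits and all other names are
hypothetical stand-ins rather than the actual storage_controller API; retries, timeouts
and the handling of a previously interrupted change are omitted.

```rust
// Illustrative only: hypothetical types and traits, not the real storage_controller API.
type NodeId = u64;
type Term = u64;
type Lsn = u64;
/// Position reported by a safekeeper: (last_log_term, flush_lsn).
type Position = (Term, Lsn);

#[derive(Clone, PartialEq)]
struct Configuration {
    generation: u32,
    sk_set: Vec<NodeId>,
    new_sk_set: Option<Vec<NodeId>>,
}

struct MemberState {
    generation: u32,
    term: Term,
    position: Position,
}

trait ConfigStore {
    fn get(&self) -> Result<Configuration, String>;
    /// Compare-and-swap: succeeds only if the stored conf still equals `old`.
    fn cas(&self, old: &Configuration, new: &Configuration) -> Result<(), String>;
}

trait Quorum {
    /// PUT `configuration` on `set`, returning states once a majority has responded.
    fn put_configuration(&self, set: &[NodeId], conf: &Configuration) -> Result<Vec<MemberState>, String>;
    /// pull_timeline into the members of `set` which don't have the timeline yet.
    fn init_timelines(&self, set: &[NodeId]) -> Result<(), String>;
    /// POST `bump_term` on `set`; success on a majority is enough.
    fn bump_term(&self, set: &[NodeId], term: Term) -> Result<(), String>;
}

fn migrate(store: &dyn ConfigStore, sk: &dyn Quorum, desired_set: Vec<NodeId>) -> Result<(), String> {
    // Fetch the current conf and switch to the joint one under CAS.
    // (The real procedure also has to detect and finish an interrupted change here.)
    let cur = store.get()?;
    let joint = Configuration {
        generation: cur.generation + 1,
        sk_set: cur.sk_set.clone(),
        new_sk_set: Some(desired_set.clone()),
    };
    store.cas(&cur, &joint)?;

    // Deliver joint_conf to the current set and remember sync_term / sync_position.
    let states = sk.put_configuration(&joint.sk_set, &joint)?;
    if states.iter().any(|s| s.generation > joint.generation) {
        return Err("another switch raced us".into());
    }
    let sync_term = states.iter().map(|s| s.term).max().unwrap_or(0);
    let sync_position = states.iter().map(|s| s.position).max().unwrap_or((0, 0));

    // Make sure the timeline exists on the new set and bump the term there.
    sk.init_timelines(&desired_set)?;
    sk.bump_term(&desired_set, sync_term)?;

    // Wait until a majority of the new set has caught up to sync_position.
    loop {
        let states = sk.put_configuration(&desired_set, &joint)?;
        let caught_up = states.iter().filter(|s| s.position >= sync_position).count();
        if caught_up > desired_set.len() / 2 {
            break;
        }
    }

    // Switch to the final, non-joint conf under CAS and deliver it to the new set.
    let new_conf = Configuration {
        generation: joint.generation + 1,
        sk_set: desired_set.clone(),
        new_sk_set: None,
    };
    store.cas(&joint, &new_conf)?;
    sk.put_configuration(&desired_set, &new_conf)?;
    Ok(())
}
```
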
-Description above focuses on safety. To make the flow practical and live, here a few more
+Description above focuses on safety. To make the flow practical and live, here are a few more
considerations.
-1) It makes sense to ping new set to ensure it we are migrating to live node(s) before
+1) It makes sense to ping the new set to ensure we are migrating to live node(s) before
   step 3.
-2) If e.g. accidentally wrong new sk set has been specified, before CAS in step `6` is completed
+2) If e.g. an accidentally wrong new sk set has been specified, before the CAS in step `6` is completed
   it is safe to roll back to the old conf with one more CAS.
-3) On step 4 timeline might be already created on members of the new set for various reasons;
+3) On step 4 the timeline might already be created on members of the new set for various reasons;
   the simplest is the procedure restart. There are more complicated scenarios, like the one mentioned
-   in step 5. Deleting and re-doing `pull_timeline` is generally unsafe without involving
-   generations, so seems simpler to treat existing timeline as success. However, this also
+   in step 5. Deleting and re-doing `pull_timeline` is generally unsafe without involving
+   generations, so it seems simpler to treat an existing timeline as success. However, this also
   has a disadvantage: you might imagine a surpassingly unlikely schedule where the condition in
   step 5 is never reached until compute is (re)awakened to synchronize the new member(s).
   I don't think we'll observe this in practice, but we can add waking up compute if needed.
4) In the end the timeline should be locally deleted on the safekeeper(s) which
   are in the old set but not in the new one, unless they are unreachable. To be
-   safe this also should be done under generation number (deletion proceeds only if
+   safe this also should be done under generation number (deletion proceeds only if
   the current configuration is <= the one in the request and the safekeeper is not a member of it).
5) If the current conf fetched on step 1 is already not joint and its members equal
   `desired_set`, jump to step 7, using it as `new_conf`.
@@ -202,47 +202,87 @@ The procedure ought to be driven from somewhere. Obvious candidates are control
plane and storage_controller; and as each of them already has a db we don't want
yet another storage. I propose to manage safekeepers in storage_controller
because 1) it is in rust, which simplifies simulation testing (more on this
-below) 2) it already manages pageservers.
+below) 2) it already manages pageservers.

This assumes that migration will be fully usable only after we migrate all
tenants/timelines to storage_controller. It is debatable whether we also want
to manage pageserver attachments for all of these, but likely we do.

-This requires us to define storcon <-> cplane interface.
+This requires us to define the storcon <-> cplane interface and changes.

-### storage_controller <-> control plane interface
+### storage_controller <-> control plane interface and changes

First of all, control plane should
[change](https://neondb.slack.com/archives/C03438W3FLZ/p1719226543199829)
to storing safekeepers per timeline instead of per tenant because we can't migrate
-tenants atomically.
+tenants atomically.

The important question is how the updated configuration is delivered from
storage_controller to control plane to provide it to computes. As always, there
are two options, pull and push. Let's do the same push as with pageserver
`/notify-attach` because 1) it keeps storage_controller out of the critical compute
-start path 2) provides easier upgrade: there won't be such a thing as 'timeline
-managed by control plane / storcon', cplane just takes the value out of its db
-when needed 3) uniformity. It makes storage_controller responsible for retrying notifying
-control plane until it succeeds.
-
-So, cplane `/notify-safekeepers` for the timeline accepts `Configuration` and
-updates it in the db if the provided conf generation is higher (the cplane db
-should also store generations for this). Similarly to [`/notify-attach`](https://www.notion.so/neondatabase/Storage-Controller-Control-Plane-interface-6de56dd310a043bfa5c2f5564fa98365), it
-should update db which makes the call successful, and then try to schedule
-`apply_config` if possible, it is ok if not. storage_controller
-should rate limit calling the endpoint, but likely this won't be needed, as migration
+start path 2) uniformity. It makes storage_controller responsible for retrying
+notifying control plane until it succeeds.
+
+It is not needed for the control plane to fully know the `Configuration`. It is
+enough for it to only be aware of the list of safekeepers in the latest
+configuration to supply it to compute, plus the associated generation number to
+protect from stale update requests and to also pass it to compute.
+
+So, cplane `/notify-safekeepers` for the timeline can accept JSON like
+```
+{
+    tenant_id: String,
+    timeline_id: String,
+    generation: u32,
+    safekeepers: Vec<SafekeeperId>,
+}
+```
+where `SafekeeperId` is
+```
+{
+    node_id: u64,
+    host: String
+}
+```
+In principle `host` is redundant, but may be useful for observability.
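
For concreteness, this payload could map to Rust types along the following lines (a
sketch only; the actual cplane/storage_controller type names may differ):

```rust
// Hypothetical serde types mirroring the JSON sketch above.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct SafekeeperId {
    node_id: u64,
    // Redundant given node_id, but handy for observability.
    host: String,
}

#[derive(Serialize, Deserialize)]
struct NotifySafekeepersRequest {
    tenant_id: String,
    timeline_id: String,
    generation: u32,
    safekeepers: Vec<SafekeeperId>,
}
```
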
+The request updates the list of safekeepers in the db if the provided conf
+generation is higher (the cplane db should also store generations for this).
+Similarly to
+[`/notify-attach`](https://www.notion.so/neondatabase/Storage-Controller-Control-Plane-interface-6de56dd310a043bfa5c2f5564fa98365),
+it should update the db, which makes the call successful, and then try to schedule
+`apply_config` if possible; it is ok if not. storage_controller should rate
+limit calling the endpoint, but likely this won't be needed, as migration
throughput is limited by `pull_timeline`.

Timeline (branch) creation in cplane should call storage_controller POST
`tenant/:tenant_id/timeline` like it currently does for sharded tenants.
-Response should be augmented with `safekeeper_conf: Configuration`. The call
-should be retried until succeeds.
+The response should be augmented with `safekeepers_generation` and `safekeepers`
+fields as described in `/notify-safekeepers` above. Initially (currently)
+these fields may be absent; in this case cplane chooses safekeepers on its own
+like it currently does. The call should be retried until it succeeds.

Timeline deletion and tenant deletion in cplane should call the appropriate
storage_controller endpoints like it currently does for sharded tenants. The
calls should be retried until they succeed.

+When compute receives the safekeepers list from control plane it needs to know the
+generation to check whether it should be updated (note that compute may get the
+safekeeper list from either cplane or safekeepers). Currently the `neon.safekeepers`
+GUC is just a comma-separated list of `host:port`. Let's prefix it with
+`g#<generation>:` to this end, so it will look like
+```
+g#42:safekeeper-0.eu-central-1.aws.neon.tech:6401,safekeeper-2.eu-central-1.aws.neon.tech:6401,safekeeper-1.eu-central-1.aws.neon.tech:6401
+```
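
As an illustration of the proposed format (not actual compute code -- walproposer is
written in C), parsing the GUC could look roughly like this, treating a missing prefix
as generation 0, i.e. generations disabled:

```rust
// Illustrative parser for the proposed `neon.safekeepers` format
// `g#<generation>:host:port,host:port,...`. A missing prefix (or generation 0)
// means generations are disabled and the list is used as today.
fn parse_safekeepers_guc(value: &str) -> (u32, Vec<&str>) {
    let (generation, list) = match value.strip_prefix("g#").and_then(|rest| rest.split_once(':')) {
        Some((generation, list)) => (generation.parse().unwrap_or(0), list),
        None => (0, value),
    };
    (generation, list.split(',').collect())
}

// parse_safekeepers_guc("g#42:sk-0:6401,sk-1:6401") returns (42, vec!["sk-0:6401", "sk-1:6401"]).
```
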
+To summarize, the list of cplane changes:
+- per-tenant -> per-timeline safekeepers management and addition of an int `safekeeper_generation` field.
+- `/notify-safekeepers` endpoint.
+- Branch creation call may return a list of safekeepers, and when it is
+  present cplane should adopt it instead of choosing on its own like it does currently.
+- `neon.safekeepers` GUC should be prefixed with `g#<generation>:`.
+
### storage_controller implementation

Current 'load everything on startup and keep in memory' easy design is fine.
@@ -360,10 +400,10 @@ source safekeeper might fail, which is not a problem if we are going to
decommission the node but leaves garbage otherwise. I'd propose in the first
version 1) Don't attempt deletion at all if node status is `offline`. 2) If it
failed, just issue a warning.
-And add PUT `/control/v1/safekeepers/:node_id/scrub` endpoint which would find and
-remove garbage timelines for manual use. It will 1) list all timelines on the
-safekeeper 2) compare each one against configuration storage: if timeline
-doesn't exist at all (had been deleted), it can be deleted. Otherwise, it can
+And add a PUT `/control/v1/safekeepers/:node_id/scrub` endpoint which would find and
+remove garbage timelines for manual use. It will 1) list all timelines on the
+safekeeper 2) compare each one against configuration storage: if the timeline
+doesn't exist at all (had been deleted), it can be deleted. Otherwise, it can
be deleted under generation number if the node is not a member of the current
generation.
Automating this is nontrivial; we'd need to register all potential missing
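
To spell out the per-timeline decision of the scrub endpoint described above, here is a
small illustrative sketch; the types and names are hypothetical, not actual
storage_controller code:

```rust
// Sketch of the scrub decision for a single timeline found on a safekeeper.
// `stored_conf` is what configuration storage has for that timeline, if anything.
type NodeId = u64;

struct Configuration {
    generation: u32,
    sk_set: Vec<NodeId>,
    new_sk_set: Option<Vec<NodeId>>,
}

enum ScrubAction {
    /// Timeline no longer exists in configuration storage: plain delete.
    Delete,
    /// Node is not a member: delete, but only guarded by this generation number.
    DeleteUnderGeneration(u32),
    /// Node is still a member of the current configuration: keep.
    Keep,
}

fn scrub_action(node: NodeId, stored_conf: Option<&Configuration>) -> ScrubAction {
    match stored_conf {
        None => ScrubAction::Delete,
        Some(conf) => {
            let is_member = conf.sk_set.contains(&node)
                || conf.new_sk_set.as_ref().map_or(false, |s| s.contains(&node));
            if is_member {
                ScrubAction::Keep
            } else {
                ScrubAction::DeleteUnderGeneration(conf.generation)
            }
        }
    }
}
```
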
@@ -412,8 +452,8 @@ There should be following layers of tests:
3) Since simulation testing injects at relatively high level points (not
   syscalls), it omits some code, in particular `pull_timeline`. Thus it is
   better to have basic tests covering the whole system as well. An extended version of
-   `test_restarts_under_load` would do: start background load and do migration
-   under it, then restart endpoint and check that no reported commits
+   `test_restarts_under_load` would do: start background load and do a migration
+   under it, then restart the endpoint and check that no reported commits
   had been lost. I'd also add one more creating a classic network split scenario,
   with one compute talking to AC and another to BD while migration from nodes
   ABC to ABD happens.
@@ -422,35 +462,51 @@ There should be following layers of tests:

## Order of implementation and rollout

-Note that
+Note that
- Control plane parts and integration with it are fully independent from
  everything else (tests would use simulation and neon_local).
+- It is reasonable to make the compute <-> safekeepers protocol change
+  independent of enabling generations.
- There is a lot of infra work making storage_controller aware of timelines
  and safekeepers and its impl/rollout should be separate from migration itself.
-- Initially walproposer can just stop working while it observers joint configuration.
+- Initially walproposer can just stop working while it observes a joint configuration.
  Such a window would typically be very short anyway.
-
-To rollout smoothly, both walproposer and safekeeper should have flag
-`configurations_enabled`; when set to false, they would work as currently, i.e.
-walproposer is able to commit on whatever safekeeper set it is provided. Until
-all timelines are managed by storcon we'd need to use current script to migrate
-and update/drop entries in the storage_controller database if it has any.
-
-Safekeepers would need to be able to talk both current and new protocol version
-with compute to reduce number of computes restarted in prod once v2 protocol is
-deployed (though before completely switching we'd need to force this).
-
-Let's have the following rollout order:
-- storage_controller becomes aware of safekeepers;
-- storage_controller gets timeline creation for new timelines and deletion requests, but
-  doesn't manage all timelines yet. Migration can be tested on these new timelines.
-  To keep control plane and storage_controller databases in sync while control
-  plane still chooses the safekeepers initially (until all timelines are imported
-  it can choose better), `TimelineCreateRequest` can get optional safekeepers
-  field with safekeepers chosen by cplane.
-- Then we can import all existing timelines from control plane to
-  storage_controller and gradually enable configurations region by region.
-
+- Obviously we want to test the whole thing thoroughly on staging and only then
+  gradually enable it in prod.
+
+Let's have the following implementation bits for gradual rollout:
+- compute gets a `neon.safekeepers_proto_version` flag.
+  Initially both compute and safekeepers will be able to talk both
+  versions so that we can delay their forced restart and for
+  simplicity of rollback in case it is needed.
+- storcon gets a `--set-safekeepers` config option, disabled by
+  default. The timeline creation request chooses safekeepers
+  (and returns them in the response to cplane) only when it is set to
+  true.
+- control_plane [see above](#storage_controller---control-plane-interface-and-changes)
+  prefixes the `neon.safekeepers` GUC with the generation number. When it is 0
+  (or the prefix is not present at all), walproposer behaves as currently, committing on
+  the provided safekeeper list -- generations are disabled.
+  If it is non-zero, it follows the rules of this RFC.
+- We provide a script for manual migration to storage controller.
+  It selects timeline(s) from the control plane db (specified ones or all of them)
+  and calls a special import endpoint on storage controller which is very
+  similar to timeline creation: it inserts into the db, sets
+  the initial configuration on the safekeepers, and calls cplane
+  `notify-safekeepers`.
+
+Then the rollout for a region would be:
+- Current situation: safekeepers are chosen by control_plane.
+- We manually migrate some timelines and test moving them around.
+- Then we enable `--set-safekeepers` so that all new timelines
+  are on storage controller.
+- Finally, migrate all existing timelines using the script (no
+  compute should be speaking the old proto version at this point).
+
+Until all timelines are managed by storcon we'd need to use the current ad hoc
+script to migrate if needed. To keep state clean, all storage controller managed
+timelines must be migrated before that, or the controller db and the configuration
+state of the safekeepers dropped manually.

Very rough implementation order:
- Add concept of configurations to safekeepers (including control file).
- Implement walproposer changes, including protocol.
- Implement storcon part. Use it in neon_local (and pytest).
- Make cplane store safekeepers per timeline instead of per tenant.
-- Implement cplane/storcon integration. Route branch creation/deletion
+- Implement cplane/storcon integration. Route branch creation/deletion
  through storcon. Then we can test migration of new branches.
-- Finally import existing branches. Then we can drop cplane
-  safekeeper selection code. Gradually enable configurations at
+- Finally import existing branches. Then we can drop the cplane
+  safekeeper selection code. Gradually enable configurations at
  computes and safekeepers. Before that, all computes must talk only v3
  protocol version.