PRS and ERS don't promote replicas taking backups #16997
base: main
Conversation
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Review Checklist
Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.
General
Tests
Documentation
New flags
If a workflow is added or modified:
Backward compatibility
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Codecov Report
Attention: Patch coverage is […]
Additional details and impacted files:

@@            Coverage Diff             @@
##             main   #16997      +/-   ##
==========================================
+ Coverage   67.31%   67.32%   +0.01%
==========================================
  Files        1569     1570       +1
  Lines      252502   252762     +260
==========================================
+ Hits       169964   170182     +218
- Misses      82538    82580      +42

☔ View full report in Codecov by Sentry.
It would be good to include an end-to-end test for this covering several cases: only one replica in the cluster, more than one replica, all replicas being backed up, etc.
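For illustration only, here is a table-driven sketch in Go of those cases. The tabletInfo type and pickCandidate helper are simplified, hypothetical stand-ins (integer positions instead of GTID sets), not code from this PR or the Vitess test suite:

```go
package sketch

import "testing"

// tabletInfo is a hypothetical, simplified stand-in for a promotion candidate.
type tabletInfo struct {
	position  int  // simplified stand-in for the replication position
	backingUp bool // whether a backup is currently running on this tablet
}

// pickCandidate mimics the intended behavior: prefer the most advanced
// candidate that is not taking a backup, and fall back to a backing-up
// candidate only if nothing else is available.
func pickCandidate(tablets []tabletInfo) (best int, ok bool) {
	best = -1
	for i, t := range tablets {
		if t.backingUp {
			continue
		}
		if best == -1 || t.position > tablets[best].position {
			best = i
		}
	}
	if best != -1 {
		return best, true
	}
	// Every replica is backing up: fall back to the most advanced one anyway.
	for i, t := range tablets {
		if best == -1 || t.position > tablets[best].position {
			best = i
		}
	}
	return best, best != -1
}

func TestPickCandidate(t *testing.T) {
	cases := []struct {
		name    string
		tablets []tabletInfo
		want    int
	}{
		{"single replica", []tabletInfo{{position: 5}}, 0},
		{"most advanced is backing up", []tabletInfo{{position: 9, backingUp: true}, {position: 8}}, 1},
		{"all replicas backing up", []tabletInfo{{position: 3, backingUp: true}, {position: 7, backingUp: true}}, 1},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got, ok := pickCandidate(tc.tablets)
			if !ok || got != tc.want {
				t.Fatalf("pickCandidate() = %d, %v; want %d, true", got, ok, tc.want)
			}
		})
	}
}
```

A real end-to-end test would instead spin up a cluster, start backups on selected replicas, and run ERS/PRS against it; the unit-level table above only captures which candidate should win in each scenario.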
Moreover, even though this is not a breaking change per se, we should still document it in the v22.0 release notes, which should go in ./changelog/22.0/22.0.0/summary.md (the file does not exist yet).
Signed-off-by: Eduardo J. Ortega U <[email protected]>
While at it, also fix goimports in a couple of files. Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
@GuptaManan100's suggestion has been applied; please have another look.
The implementation seems correct. My comments are mostly about naming things etc.
go/vt/vtctl/reparentutil/util.go
Outdated
// returns it. It is safe to call from multiple goroutines.
func findPositionAndLagForTablet(ctx context.Context, tablet *topodatapb.Tablet, logger logutil.Logger, tmc tmclient.TabletManagerClient, waitTimeout time.Duration) (replication.Position, time.Duration, error) {
func findPositionLagBackingUpForTablet(ctx context.Context, tablet *topodatapb.Tablet, logger logutil.Logger, tmc tmclient.TabletManagerClient, waitTimeout time.Duration) (replication.Position, time.Duration, bool, error) {
Suggested change:
func findPositionLagBackingUpForTablet(ctx context.Context, tablet *topodatapb.Tablet, logger logutil.Logger, tmc tmclient.TabletManagerClient, waitTimeout time.Duration) (replication.Position, time.Duration, bool, error) {
func findPositionLagAndBackupStatusForTablet(ctx context.Context, tablet *topodatapb.Tablet, logger logutil.Logger, tmc tmclient.TabletManagerClient, waitTimeout time.Duration) (replication.Position, time.Duration, bool, error) {
Maybe we can come up with some shorter function name for this.
I settled for findTabletPositionLagBackupStatus(); I hope it's acceptable.
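As background on what a helper like this returns, and why it needs to be safe to call from multiple goroutines, here is a minimal self-contained sketch. tabletStatus, probeTablet, and collectStatuses are hypothetical stand-ins; the real helper calls the tablet manager client with a wait timeout rather than a local function:

```go
package sketch

import (
	"sync"
	"time"
)

// tabletStatus is a hypothetical stand-in for what the helper reports per
// tablet: replication position, replication lag, and whether a backup is
// currently running.
type tabletStatus struct {
	position  string // stand-in for replication.Position
	lag       time.Duration
	backingUp bool
	err       error
}

// probeTablet is a hypothetical placeholder for the per-tablet RPC.
func probeTablet(alias string) tabletStatus {
	return tabletStatus{position: "gtid-set-for-" + alias}
}

// collectStatuses probes every tablet concurrently, which is why the
// per-tablet helper must be safe to call from multiple goroutines.
func collectStatuses(aliases []string) map[string]tabletStatus {
	var (
		mu       sync.Mutex
		wg       sync.WaitGroup
		statuses = make(map[string]tabletStatus, len(aliases))
	)
	for _, alias := range aliases {
		wg.Add(1)
		go func(alias string) {
			defer wg.Done()
			st := probeTablet(alias)
			mu.Lock()
			statuses[alias] = st
			mu.Unlock()
		}(alias)
	}
	wg.Wait()
	return statuses
}
```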
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
auto code gen Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
@GuptaManan100 / @ejortegau: I can't see why this would change this behaviour, but I wanted to double-check that this ERS behaviour remains true after this change: […]
To be specific, if the backing-up […]. And could any of the backup engines prevent this from happening? For example, some engines take out some heavy MySQL-level locks 🤔
The backing-up replicas are still viable candidates. We don't remove them from the list even now. It is just an extra piece of information used to sort the tablets. If we have two equally advanced tablets (in terms of position), then the one taking the backup will be lower down the list of candidates. That being said, if a backing-up replica is the most advanced tablet there is, we will still promote it.
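A minimal sketch of the ordering described here, assuming a hypothetical candidate type and integer positions (the real code compares GTID-based replication positions):

```go
package main

import (
	"fmt"
	"sort"
)

// candidate is a hypothetical, simplified promotion candidate.
type candidate struct {
	alias     string
	position  int  // simplified; the real code compares GTID-based positions
	backingUp bool
}

// sortCandidates orders candidates with the most advanced position first;
// among equally advanced candidates, the one taking a backup sorts lower.
func sortCandidates(cands []candidate) {
	sort.SliceStable(cands, func(i, j int) bool {
		if cands[i].position != cands[j].position {
			return cands[i].position > cands[j].position
		}
		return !cands[i].backingUp && cands[j].backingUp
	})
}

func main() {
	cands := []candidate{
		{alias: "zone1-101", position: 42, backingUp: true},
		{alias: "zone1-102", position: 42, backingUp: false},
		{alias: "zone1-103", position: 40, backingUp: false},
	}
	sortCandidates(cands)
	fmt.Println(cands[0].alias) // zone1-102: equally advanced but not backing up
}
```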
@GuptaManan100 at least in our production this would be undesirable. We had this happen on one shard (the backup node was most advanced, surprisingly) and the overhead of the backup was enough to impact primary operations.
I think it would be more ideal for the second-most-current replica to be promoted in this scenario, and whatever magic we have to "catch up to the most advanced replica" would be used to catch up to the backing-up node.
Without this behaviour I think we are open to the same scenario/risks that prompted this support to be added. cc @ejortegau for thoughts here
If backup operations impact the tablet's ability to act as a primary at all, then we should decide to never promote it. We can do that by filtering such candidates out in […]. If we make this change, then the tablet will be used to get another tablet caught up, but it won't be elected itself. However, with these changes, if we find ourselves in a situation where the only viable tablet is taking a backup, we will not promote it.
I think this falls back to what behavior we want for the two different cases. Using […]
This leaves open what should be done for PRS, since PRS logic does not have ideal source selection after picking the most up-to-date one, as far as I can see. We could do what I proposed here for that, i.e. fail the PRS with an error that allows the operator to decide whether they will cancel the backup or pick a different source. Thoughts?
@ejortegau Yes, we can go back to the original version; I think that would do what we want it to. As far as PRS is concerned, the primary is online, so there is no need for a multi-step promotion. Any candidate that we select can be promoted; we wait for it to catch up to the primary. So, in PRS, it would be enough to remove these tablets from consideration in […].
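A minimal sketch of that PRS-side filtering, under the assumption (from the discussion above) that an empty candidate list should surface an error so the operator can decide; prsCandidate and filterBackingUp are hypothetical names, not the actual PR code:

```go
package main

import (
	"errors"
	"fmt"
)

// prsCandidate is a hypothetical, simplified PRS candidate.
type prsCandidate struct {
	alias     string
	backingUp bool
}

// filterBackingUp removes tablets that are currently taking a backup from
// PRS consideration. If that leaves no candidates, it returns an error so
// the operator can cancel the backup or pick a new primary explicitly.
func filterBackingUp(cands []prsCandidate) ([]prsCandidate, error) {
	out := make([]prsCandidate, 0, len(cands))
	for _, c := range cands {
		if !c.backingUp {
			out = append(out, c)
		}
	}
	if len(out) == 0 {
		return nil, errors.New("all eligible tablets are taking backups; cancel a backup or pick a new primary explicitly")
	}
	return out, nil
}

func main() {
	cands := []prsCandidate{
		{alias: "zone1-101", backingUp: true},
		{alias: "zone1-102", backingUp: false},
	}
	remaining, err := filterBackingUp(cands)
	fmt.Println(remaining, err) // [{zone1-102 false}] <nil>
}
```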
Have PRS remove hosts taking backups from consideration; and ERS only consider them if there are no other valid candidates that are not taking backups. Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
@@ -58,7 +58,8 @@ const (
// cell as the current primary, and to be different from avoidPrimaryAlias. The
// tablet with the most advanced replication position is chosen to minimize the
// amount of time spent catching up with the current primary. Further ties are
// broken by the durability rules.
// broken by the durability rules. Tablets taking backups are excluded from
@ejortegau I think this comment is no longer accurate
No, it's still accurate. It reflects the fact that PRS will not promote a tablet that is taking a backup.
I have updated the PR to do the following: […]
Please have another look.
Just one comment change; the rest looks good to me!
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Description
This PR changes ERS and PRS so that they prefer not promoting hosts that are currently taking backups.
The implementation follows what was suggested in the RFC of the issue (link below). Namely, the RPCs used to get information about candidates now include an extra field indicating whether they are running backups or not, and that is used to order the list of promotion candidates.
Related Issue(s)
#16558
Checklist
Deployment Notes
N/A