
PRS and ERS don't promote replicas taking backups #16997

Open · wants to merge 24 commits into main

Conversation

@ejortegau (Contributor) commented Oct 18, 2024

Description

This PR changes ERS and PRS so that they prefer not promoting hosts that are currently taking backups.

The implementation follows what was suggested in the RFC of the issue (linked below). Namely, the RPCs used to get information about candidates now include an extra field indicating whether they are running a backup, and that field is used to order the list of promotion candidates.
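
A minimal sketch of the shape of this change, for orientation only: the struct, field, and function names below are illustrative placeholders, not the PR's actual RPC or protobuf identifiers.

package main

import (
	"fmt"
	"time"
)

// candidateStatus is a stand-in for the per-tablet information the reparent
// code gathers over RPC before choosing a promotion candidate. The new piece
// of information described above is BackupRunning.
type candidateStatus struct {
	Alias         string
	Position      uint64        // stand-in for a real replication position
	Lag           time.Duration // replication lag
	BackupRunning bool          // reported by the tablet alongside position and lag
}

// fetchCandidateStatus mimics asking a single tablet for its position, lag,
// and whether a backup is in progress (hypothetical helper, not a Vitess API).
func fetchCandidateStatus(alias string) candidateStatus {
	// In the real code this would be an RPC to the tablet manager.
	return candidateStatus{Alias: alias, Position: 100, Lag: 2 * time.Second}
}

func main() {
	s := fetchCandidateStatus("zone1-0000000101")
	fmt.Printf("%s: pos=%d lag=%v backingUp=%v\n", s.Alias, s.Position, s.Lag, s.BackupRunning)
}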

Related Issue(s)

#16558

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • New or modified tests pass consistently locally and on CI
  • Documentation was added or is not required

Deployment Notes

N/A

@vitess-bot (bot) commented Oct 18, 2024

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test; enhancements and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator.
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot vitess-bot bot added NeedsBackportReason If backport labels have been applied to a PR, a justification is required NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request NeedsWebsiteDocsUpdate What it says labels Oct 18, 2024
@github-actions github-actions bot added this to the v22.0.0 milestone Oct 18, 2024
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
@codecov (bot) commented Oct 18, 2024

Codecov Report

Attention: Patch coverage is 93.02326% with 3 lines in your changes missing coverage. Please review.

Project coverage is 67.32%. Comparing base (469bdcc) to head (97c0271).
Report is 10 commits behind head on main.

Files with missing lines                     Patch %   Lines
go/vt/vttablet/tabletmanager/rpc_backup.go   0.00%     2 Missing ⚠️
go/vt/vtctl/reparentutil/util.go             92.30%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #16997      +/-   ##
==========================================
+ Coverage   67.31%   67.32%   +0.01%     
==========================================
  Files        1569     1570       +1     
  Lines      252502   252762     +260     
==========================================
+ Hits       169964   170182     +218     
- Misses      82538    82580      +42     


@ejortegau ejortegau marked this pull request as ready for review October 18, 2024 12:33
@frouioui frouioui removed NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request NeedsBackportReason If backport labels have been applied to a PR, a justification is required labels Oct 18, 2024
@frouioui frouioui self-requested a review October 18, 2024 18:52
@frouioui (Member) left a comment

It would be good to include an end-to-end test for this covering several cases: only 1 replica in the cluster, and more than one replica, all replicas being backed up, etc.

Moreover, even though this is not a breaking change per se, we should still document it in the v22.0 release notes, which go in ./changelog/22.0/22.0.0/summary.md; that file does not exist yet.

Signed-off-by: Eduardo J. Ortega U <[email protected]>
While at it, also fix goimports in a couple of files.

Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
@ejortegau (Contributor, Author) commented:

@GuptaManan100's suggestion has been applied; please have another look.

@deepthi (Member) left a comment

The implementation seems correct. My comments are mostly about naming things etc.

Five review threads on changelog/22.0/22.0.0/summary.md were marked outdated and resolved.
 // returns it. It is safe to call from multiple goroutines.
-func findPositionAndLagForTablet(ctx context.Context, tablet *topodatapb.Tablet, logger logutil.Logger, tmc tmclient.TabletManagerClient, waitTimeout time.Duration) (replication.Position, time.Duration, error) {
+func findPositionLagBackingUpForTablet(ctx context.Context, tablet *topodatapb.Tablet, logger logutil.Logger, tmc tmclient.TabletManagerClient, waitTimeout time.Duration) (replication.Position, time.Duration, bool, error) {

A Member left a comment with a suggested change:

Suggested change
-func findPositionLagBackingUpForTablet(ctx context.Context, tablet *topodatapb.Tablet, logger logutil.Logger, tmc tmclient.TabletManagerClient, waitTimeout time.Duration) (replication.Position, time.Duration, bool, error) {
+func findPositionLagAndBackupStatusForTablet(ctx context.Context, tablet *topodatapb.Tablet, logger logutil.Logger, tmc tmclient.TabletManagerClient, waitTimeout time.Duration) (replication.Position, time.Duration, bool, error) {

A Member commented:

Maybe we can come up with some shorter function name for this.

@ejortegau (Contributor, Author) replied:

I settled on findTabletPositionLagBackupStatus(); I hope that's acceptable.

Additional review threads on go/vt/vtctl/reparentutil/util_test.go and go/vt/vttablet/tabletmanager/rpc_backup.go were resolved.
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
auto code gen

Signed-off-by: Eduardo J. Ortega U <[email protected]>
Signed-off-by: Eduardo J. Ortega U <[email protected]>
@timvaillancourt (Contributor) commented Nov 5, 2024

@GuptaManan100 / @ejortegau: I can't see why this would change this behaviour, but I wanted to double check that this ERS behaviour remains true after this change:

Wait for the primary-elect to catch up to the most advanced replica, if it isn't already the most advanced.

To be specific, if the backing-up REPLICA is the most up-to-date in replication (we've had this happen with xtrabackup somehow), will the primary-elect catch up to the backing-up REPLICA that we likely ignored as a candidate?

And could any of the backup engines prevent this from happening? For example, some engines take out some heavy mysql-level locks 🤔

@GuptaManan100 (Member) commented:

The replicas taking backups are still viable candidates; we don't remove them from the list even now. It is just an extra piece of information used to sort the tablets. If we have two equally advanced tablets (in terms of position), then the one taking the backup will be lower down the list of candidates. That being said, if a backing-up replica is the most advanced tablet there is, we will still promote it.
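
A minimal sketch of the ordering described in this comment, assuming a simplified candidate type; the names and the integer position are illustrative, not Vitess's actual sorting code.

package main

import (
	"fmt"
	"sort"
)

// candidate is a simplified stand-in for a promotion candidate.
type candidate struct {
	Alias         string
	Position      uint64 // higher means more advanced (simplified)
	BackupRunning bool
}

// sortCandidates orders candidates by replication position first; among
// equally advanced candidates, those not taking a backup come first.
func sortCandidates(cs []candidate) {
	sort.SliceStable(cs, func(i, j int) bool {
		if cs[i].Position != cs[j].Position {
			return cs[i].Position > cs[j].Position
		}
		// Tiebreaker: prefer the candidate that is not taking a backup.
		return !cs[i].BackupRunning && cs[j].BackupRunning
	})
}

func main() {
	cs := []candidate{
		{"zone1-101", 50, true},  // most advanced tie, but taking a backup
		{"zone1-102", 50, false}, // equally advanced, not taking a backup
		{"zone1-103", 40, false},
	}
	sortCandidates(cs)
	fmt.Println(cs) // zone1-102 sorts first; zone1-101 would still win if it were strictly most advanced
}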

@timvaillancourt (Contributor) commented Nov 6, 2024

That being said, if a backing up replica is the most advanced tablet there is, we will still promote it.

@GuptaManan100 At least in our production this would be undesirable. We had this happen on one shard (the backup node was the most advanced, surprisingly), and the overhead of the backup was enough to impact primary operations.

I think it would be better for the second-most-current replica to be promoted in this scenario, and for whatever magic we have to "catch up to the most advanced replica" to be used to catch up to the backing-up node.

Without this behaviour I think we are open to the same scenario/risks that prompted this support to be added. cc @ejortegau for thoughts here.

@GuptaManan100 (Member) commented:

If backup operations affect a tablet's ability to serve as a primary at all, then we should decide to never promote it. We can do that by filtering the candidates out in filterValidCandidates. We can remove the sorting changes as well; just removing these tablets would be enough.

If we make this change, then the tablet will be used to get another tablet caught up, but it won't be elected itself. However, with these changes, if we find ourselves in a situation where the only viable tablet is taking a backup, we will not promote it.
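
A minimal sketch of the filtering idea described in this comment, using a hypothetical helper name; the real change would live in filterValidCandidates, and this is only an illustration.

package main

import "fmt"

// tablet is a simplified stand-in for a reparent candidate.
type tablet struct {
	Alias         string
	BackupRunning bool
}

// promotableCandidates drops tablets that are taking a backup from the
// promotion list entirely. Such a tablet can still serve as a source for
// catching another tablet up; it just never gets elected itself.
func promotableCandidates(all []tablet) []tablet {
	var out []tablet
	for _, t := range all {
		if t.BackupRunning {
			continue // never promote a tablet that is taking a backup
		}
		out = append(out, t)
	}
	return out
}

func main() {
	all := []tablet{{"zone1-101", false}, {"zone1-102", true}}
	fmt.Println(promotableCandidates(all)) // only zone1-101 remains promotable
}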

@ejortegau (Contributor, Author) commented:

I think this comes back to what behavior we want for the two different cases. Using filterValidCandidates would work fine for ERS and is in fact what the original version of the PR did in that case.

This leaves open what should be done for PRS, since as far as I can see the PRS logic does not select an alternate source after picking the most up-to-date one. We could do what I proposed here for that case, i.e. fail the PRS with an error, which lets the operator decide whether to cancel the backup or pick a different source.

Thoughts?

@GuptaManan100 (Member) commented:

@ejortegau Yes, we can go back to the original version; I think that would do what we want it to.

As far as PRS is concerned, the primary is online, so there is no need for a multi-step promotion. Any candidate that we select can be promoted; we wait for it to catch up to the primary. So, in PRS, it would be enough to remove these tablets from consideration in the ElectNewPrimary function.

Have PRS remove hosts taking backups from consideration; and ERS only
consider them if there are no other valid candidates that are not taking
backups.

Signed-off-by: Eduardo J. Ortega U <[email protected]>
@@ -58,7 +58,8 @@ const (
 // cell as the current primary, and to be different from avoidPrimaryAlias. The
 // tablet with the most advanced replication position is chosen to minimize the
 // amount of time spent catching up with the current primary. Further ties are
-// broken by the durability rules.
+// broken by the durability rules. Tablets taking backups are excluded from
A Contributor commented:

@ejortegau I think this comment is no longer accurate

@ejortegau (Contributor, Author) replied Nov 8, 2024:

No, it's still accurate. It reflects the fact that PRS will not promote a tablet that is taking a backup.

@ejortegau (Contributor, Author) commented Nov 8, 2024

I have updated the PR to do the following:

  1. In the ERS case, filterValidCandidates() does not return hosts taking backups, unless there are no better candidates not taking backups. This means ERS can promote hosts taking backups, but only if there are no other candidates that are not.
  2. In the PRS case, ElectNewPrimary() excludes hosts taking backups. This means PRS can fail if there's only one good candidate and it's taking a backup.

Please have another look.
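
A minimal sketch of the two behaviors summarized above, under simplified types and hypothetical helper names; the real logic lives in filterValidCandidates and ElectNewPrimary, and this is only an illustration.

package main

import "fmt"

// replica is a simplified stand-in for a reparent candidate.
type replica struct {
	Alias         string
	BackupRunning bool
}

// ersCandidates mirrors behavior (1): prefer replicas not taking a backup,
// but fall back to the backing-up ones if nothing else is available.
func ersCandidates(rs []replica) []replica {
	var preferred, backingUp []replica
	for _, r := range rs {
		if r.BackupRunning {
			backingUp = append(backingUp, r)
		} else {
			preferred = append(preferred, r)
		}
	}
	if len(preferred) > 0 {
		return preferred
	}
	return backingUp
}

// prsCandidates mirrors behavior (2): replicas taking a backup are excluded
// outright, so the selection can come back empty and the PRS fails.
func prsCandidates(rs []replica) ([]replica, error) {
	var out []replica
	for _, r := range rs {
		if !r.BackupRunning {
			out = append(out, r)
		}
	}
	if len(out) == 0 {
		return nil, fmt.Errorf("no valid promotion candidates: all replicas are taking backups")
	}
	return out, nil
}

func main() {
	rs := []replica{{"zone1-101", true}, {"zone1-102", true}}
	fmt.Println(ersCandidates(rs)) // ERS falls back to the backing-up replicas
	if _, err := prsCandidates(rs); err != nil {
		fmt.Println("PRS:", err) // PRS-style selection fails instead
	}
}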

@GuptaManan100 (Member) left a comment

Just one comment change; the rest looks good to me!

Review thread on go/vt/vtctl/reparentutil/emergency_reparenter.go (outdated, resolved)
Signed-off-by: Eduardo J. Ortega U <[email protected]>