Upgrading db packages stops all nodes when growing the cluster by 3 in parallel (custom db packages) #8551
Comments
@soyacz this logic can go: we don't care about the ordering of starting nodes anymore. We can remove that if, and remove the whole else branch. We should just stop/start the nodes that are being asked for; we shouldn't touch any other nodes at that point, it's a mistake.
Yes, shouldn't be hard to fix, let's plan it for this sprint.
When updating db packages on multiple nodes, SCT stops all the nodes. This is wrong and causes tests to fail when the `update_db_packages` param is provided. Fix by removing the broken logic and dropping the code for node stop ordering. fixes: scylladb#8551
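A minimal sketch of the intended behavior after the fix, assuming hypothetical method names (`stop_scylla_server`, `start_scylla_server`, and `_update_db_package` are stand-ins, not necessarily SCT's actual API): only the nodes passed in are stopped and restarted, and no ordering is imposed on the rest of the cluster.

```python
# Sketch of the fix: touch only the nodes that were asked for.
# Method names below are illustrative stand-ins for SCT's API.
def update_db_packages(self, nodes):
    for node in nodes:
        node.stop_scylla_server()      # stop only this node
        self._update_db_package(node)  # install the custom packages
        node.start_scylla_server()     # restart it before moving on
```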
A test with custom Scylla db packages (`update_db_packages` param set). When growing the cluster by 3 nodes in parallel, SCT stops all the nodes instead of only the added ones.
Culprit line:
scylla-cluster-tests/sdcm/cluster.py
Line 4220 in 066dd02
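For context, a hypothetical reconstruction of the kind of ordering logic at fault (illustrative only, not the literal code at the line above): when more than one node is updated, the else branch iterates over every node in the cluster rather than only the nodes being updated, which stops the running nodes and kills the c-s load.

```python
# Hypothetical reconstruction of the broken branch (illustrative only,
# not the literal code at cluster.py line 4220):
def update_db_packages(self, nodes):
    if len(nodes) == 1:
        nodes[0].stop_scylla_server()
    else:
        for node in self.nodes:        # BUG: iterates ALL cluster nodes,
            node.stop_scylla_server()  # not just the nodes being updated
```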
Impact
The test fails due to c-s (cassandra-stress) errors when all the nodes are stopped.
How frequently does it reproduce?
Always, when growing the cluster in parallel and using custom db packages.
Installation details
Cluster size: 3 nodes (i4i.2xlarge)
Scylla Nodes used in this run:
OS / Image:
ami-0415b87a177bf40a6
(aws: undefined_region)
Test:
scylla-enterprise-perf-regression-latency-650gb-elasticity
Test id:
bc75f3a1-389f-4c3e-a84f-ef388d9bd03c
Test name:
scylla-staging/lukasz/scylla-enterprise-perf-regression-latency-650gb-elasticity
Test method:
performance_regression_test.PerformanceRegressionTest.test_latency_mixed_with_nemesis
Test config file(s):
Logs and commands
$ hydra investigate show-monitor bc75f3a1-389f-4c3e-a84f-ef388d9bd03c
$ hydra investigate show-logs bc75f3a1-389f-4c3e-a84f-ef388d9bd03c
Logs:
Jenkins job URL
Argus