Switch manager tests to run on singleDC environment #7435
Conversation
Since there is an issue with multiDC cluster restore when EaR is turned on (scylladb/scylla-manager#3829), it was decided to temporarily switch the main part of the Manager jobs to run on a singleDC cluster. Only one multiDC cluster job is left, for the enterprise 2022 version where EaR is not implemented.
The affected test is valid only for a multiDC configuration; otherwise, it should be skipped.
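To illustrate that last point, here is a minimal, hypothetical sketch (not the actual SCT code) of how a multiDC-only test can skip itself on a singleDC setup; the `region_names` attribute and the test class are assumptions made for this example:

```python
import unittest


class ManagerMultiDcOnlyTest(unittest.TestCase):
    """Hypothetical example, not the real SCT test class."""

    # Stand-in for whatever the real test configuration exposes;
    # a single entry means the cluster lives in one DC/region.
    region_names = ["us-west-2"]

    def test_healthcheck_change_max_timeout(self):
        # The check only makes sense when nodes span more than one DC.
        if len(self.region_names) < 2:
            self.skipTest("valid only for a multiDC configuration; skipping on singleDC")
        # ... multiDC-specific healthcheck timeout checks would go here ...


if __name__ == "__main__":
    unittest.main()
```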
@@ -5,9 +5,9 @@ def lib = library identifier: 'sct@snapshot', retriever: legacySCM(scm)

 managerPipeline(
     backend: 'aws',
-    region: '''["us-east-1", "us-west-2"]''',
+    region: 'us-west-2',
Is it on purpose that you are putting each case in a different region? (If you can, you can even use 'random'; I fixed it to work a while back.)
I randomly chose one of the regions that were previously supported in the multiDC runs: "us-east-1" or "us-west-2".
Just to explain: setting the region to random can help a bit with spot-capacity issues, since tests would spread across all of the supported regions instead of all running in the exact same ones.
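A minimal sketch of that idea, assuming a hypothetical list of supported regions (the authoritative set comes from the job/SCT configuration, not from this snippet): choosing the region at random per run spreads spot-instance demand across regions instead of concentrating every job in the same one.

```python
import random

# Illustrative only; the real list of supported regions lives in the
# Jenkins job / SCT configuration.
SUPPORTED_REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]


def pick_region() -> str:
    """Pick a region at random so concurrent runs don't all hit the same spot market."""
    return random.choice(SUPPORTED_REGIONS)


if __name__ == "__main__":
    print(pick_region())
```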
LGTM
@@ -349,9 +349,8 @@ def test_manager_sanity(self):
             self.test_mgmt_cluster_crud()
         with self.subTest('Mgmt cluster Health Check'):
             self.test_mgmt_cluster_healthcheck()
-        # test_healthcheck_change_max_timeout requires a multi dc run. And since ipv6 cannot run in multi dc, this test
Good that this comment is going away, because the ipv6 part of it was always wrong: we can use ipv6 on AWS in multi-dc cases.
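As an aside, here is a hedged sketch of the alternative to documenting the restriction in a comment: gate the multi-dc-only subtest on the actual cluster topology. Everything below (class name, `regions` attribute, placeholder bodies) is hypothetical and not the real test_manager_sanity implementation.

```python
import unittest


class ManagerSanitySketch(unittest.TestCase):
    """Hypothetical illustration of conditionally running a multi-dc-only subtest."""

    regions = ["us-east-1", "us-west-2"]  # stand-in for the test configuration

    def is_multi_dc(self) -> bool:
        # Treat a cluster that spans more than one region as multi-dc.
        return len(self.regions) > 1

    def test_manager_sanity(self):
        with self.subTest("Mgmt cluster Health Check"):
            pass  # placeholder for the healthcheck steps
        if self.is_multi_dc():
            with self.subTest("Healthcheck change max timeout (multi-dc only)"):
                pass  # placeholder for the multi-dc-only check


if __name__ == "__main__":
    unittest.main()
```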
Sanity tests on debian11, ubuntu22 and ubuntu24 were switched to run on a multiDC cluster. This configuration was already in place some time ago, before issue scylladb/scylla-manager#3871 was found; after that, the Manager jobs were switched to run only on a singleDC cluster (scylladb#7435). Since the fix on the Manager side is ready now, the multiDC setup can be brought back.
Closes scylladb/scylla-manager#3850
Should be merged together with #7365
Since there is an issue with multiDC cluster restore when EaR is turned on (scylladb/scylla-manager#3829), it was decided to:
- temporarily switch the main part of the Manager jobs to run on a singleDC cluster;
- leave only one multiDC cluster job, for the enterprise 2022 version where EaR is not implemented.
Once scylladb/scylla-manager#3829 is resolved, the multiDC cluster setup will be brought back.