Add testcase for scaling-in while 3-node cluster having 90% storage utilization #9131
Comments
@pehala This scenario seems to be incorrect. Without deletes you will hit
But in this scenario the scale-out happens before the scale-in. So, if I understand correctly, it is: add node 4, then remove node 3, so in practice node 3 is swapped for node 4.
I updated the description a bit. Based on the suggestion in the test plan document, we have two variants for scale-in: (a) 3-node cluster scale-in at 90%, and (b) 4-node cluster scale-in at 67%. For the 3-node cluster scale-in at 90%: add a new node, wait until tablet migration completes, drop 20% of the data from the cluster, and then scale in by removing a node.
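For reference, a back-of-envelope check of the two variants. The node counts and percentages come from the comments above; the assumption of perfectly even data distribution across equal-capacity nodes is mine:

```python
# Rough per-node utilization math for the two scale-in variants.
# Assumes perfectly even data distribution across equal-capacity nodes.

total = 3 * 0.90                 # 3 nodes at 90% => 2.7 node-capacities of data

# Variant (b): scale out first, so the 4-node cluster sits at ~67%.
print(f"4-node utilization: {total / 4:.1%}")                    # 67.5%

# Variant (a): drop 20% of the data, then scale back in to 3 nodes.
after_drop = total * 0.80        # 2.16 node-capacities of data left
print(f"3-node utilization after drop: {after_drop / 3:.1%}")    # 72.0%
```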
- Reached 92% disk usage, then waited for 30 minutes with no reads or writes.
- After the 30-minute idle period, started a throttled write load.
- Scaled out by adding a new node at ~90% disk usage.
- Dropped some data in preparation for the scale-in.
- A few minutes later, removed a node from the 3-node cluster.

[Latency graphs]

Final 3-node cluster disk usage: 92%, 91%, and 87%. https://argus.scylladb.com/tests/scylla-cluster-tests/1ffa6d64-004a-4443-a3c9-d52a18ea08e1
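A minimal sketch of that sequence, as it might look in a Python test harness. All object, helper, and parameter names here are hypothetical stand-ins, not actual scylla-cluster-tests APIs:

```python
import time

# Illustrative only: `cluster`, `stress`, and every method on them are
# hypothetical placeholders for whatever the real test harness provides.
def scale_in_at_90_percent(cluster, stress):
    stress.fill_until(disk_usage=0.92)   # write until ~92% disk usage
    time.sleep(30 * 60)                  # 30 minutes idle: no reads or writes
    stress.start_throttled()             # resume writes at a throttled rate
    cluster.add_node()                   # scale out at ~90% usage
    cluster.drop_data(fraction=0.20)     # drop ~20% of the data set
    cluster.decommission_node()          # scale in back to a 3-node cluster
```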
But if we drop 20% of the data as suggested in the test plan, shouldn't we get ca. 70% here?
It was incorrectly stated in the doc; I fixed it. The idea behind it is to simulate a scenario where we lose plenty of data and can therefore scale in to save resources.
@Lakshmipathi ping |
If I'm not wrong, the throttled writes we do during the scaling operations (3 and 8) contribute additional disk usage. Let me add more graphs to this issue.
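A rough way to quantify that, using only the numbers reported above and again assuming even data distribution: starting from ~92% per node and dropping 20% of the data should ideally land a 3-node cluster at ~74%, so the gap to the reported 92%/91%/87% would be what the throttled writes need to explain:

```python
# Back-of-envelope: disk usage the throttled writes would need to account
# for, given the reported final numbers. Assumes even data distribution.

expected = 0.92 * 0.80                   # start at ~92%, drop 20%: ~73.6% ideal
observed = (0.92 + 0.91 + 0.87) / 3      # reported final usage: ~90%
print(f"unaccounted usage: {observed - expected:.1%} per node")   # ~16.4%
```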
Merged into #9156 |