Fix context issue during cleanup of kind clusters #6771
base: main
```diff
@@ -528,19 +528,28 @@ function destroy_external_servers {
 
 function clean_kind {
   echo "=== Cleaning up stale kind clusters ==="
-  read -a all_kind_clusters <<< $(kind get clusters)
-  for kind_cluster_name in "${all_kind_clusters[@]}"; do
-    creationTimestamp=$(kubectl get nodes --context kind-$kind_cluster_name -o json -l node-role.kubernetes.io/control-plane | \
-      jq -r '.items[0].metadata.creationTimestamp')
+  LOCK_FILE="$THIS_DIR/.kube/config.lock"
+  (
+    flock -x 200
+
+    for context in $(kubectl config get-contexts -o name | grep 'kind-'); do
+      cluster_name=$(echo $context | sed 's/^kind-//')
+      creationTimestamp=$(kubectl get nodes --context $context -o json -l node-role.kubernetes.io/control-plane | \
+        jq -r '.items[0].metadata.creationTimestamp' || true)
+      if [[ -z $creationTimestamp ]]; then
+        continue
+      fi
       creation=$(printUnixTimestamp "$creationTimestamp")
       now=$(date -u '+%s')
       diff=$((now-creation))
       timeout=$(($UNTIL_TIME_IN_MINS*60))
       if [[ $diff -gt $timeout ]]; then
-        echo "=== kind ${kind_cluster_name} present from more than $UNTIL_TIME_IN_MINS minutes ==="
-        kind delete cluster --name $kind_cluster_name
+        echo "=== Cluster ${cluster_name} is present for more than $UNTIL_TIME_IN_MINS minutes ==="
+        kind delete cluster --name $cluster_name
       fi
     done
+  ) 200>>"$LOCK_FILE"
+  rm -rf $LOCK_FILE
 }
 
 if ! command -v kind &> /dev/null
```

Review thread on the `rm -rf $LOCK_FILE` line:

this is not a good idea IMO. It feels like there can be a race condition where we delete the file while another job is holding the lock?

But until we delete `config.lock` the other process will not be able to acquire the lock and it may panic before invoking … @antoninbas, what's your opinion on this?

We should not use … I think both approaches will work.
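For reference, the locking pattern in the patch reduces to the following runnable sketch (the lock file path and the `sleep` standing in for the cleanup work are illustrative): a subshell binds file descriptor 200 to the lock file, and `flock -x` serializes concurrent entries into the critical section.

```shell
#!/usr/bin/env bash
# Illustrative lock file path; the patch uses "$THIS_DIR/.kube/config.lock".
LOCK_FILE="$(mktemp -d)/cleanup.lock"

cleanup_critical_section() {
    (
        flock -x 200              # blocks until the exclusive lock is free
        echo "job $1 holds the lock"
        sleep 1                   # stand-in for the kubectl/kind cleanup work
    ) 200>>"$LOCK_FILE"           # fd 200 refers to the lock file inside the subshell
}

# Two concurrent jobs serialize on the lock instead of interleaving.
cleanup_critical_section A &
cleanup_critical_section B &
wait
# Note: flock(1) releases the lock when fd 200 is closed, but leaves the
# file behind; that leftover file is what `rm -rf $LOCK_FILE` targets.
```

Note that an `flock` lock lives on the open file description, not the path: removing the file does not release a lock someone already holds, but the next job will create a fresh file and lock it independently, so two jobs can end up in the critical section at once. That is the race-condition concern raised in the review.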
@jainpulkit22 I'm a bit confused. At first I thought this would work because `.kube/config.lock` was also used by `kind` / `kubectl`. See https://github.com/kubernetes-sigs/kind/blob/1c5a56b30145d98a61c0538975ca0894ad65e0d0/pkg/cluster/internal/kubeconfig/internal/kubeconfig/lock.go#L27

But then I ran some tests, and it seems to me that `flock` and the Go code used to acquire the lock (`os.OpenFile(lockName(filename), os.O_CREATE|os.O_EXCL, 0)`) are not "compatible". For example, a bash script using `flock` to acquire an exclusive lock on `~/.kube/config.lock` can execute concurrently with a `kind` command that acquires an exclusive lock on the same file using the code I linked to above.

This is the code I used for testing: `test.go`, `test.sh`. Could you confirm that you tested this approach successfully? Maybe there is an issue in my test programs.

If the 2 locking schemes are not compatible, I don't think this patch will have the intended effect, as we can have a concurrent call to `kind create cluster` which will mutate the kubeconfig file.

It also seems that because `flock` will create the file and not delete it, `kind` will fail to acquire the lock later (file already exists). I tried, and got the "file exists" error from `kind`.

Finally, even if the locking schemes were compatible, I am not sure that the current version would be sufficient, as `kind create cluster` will only acquire the lock while it's writing the config: therefore, maybe we could still have a context in the config for which the control plane is not ready yet?
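The incompatibility described above can be demonstrated in bash alone (the reviewer's actual `test.go`/`test.sh` are not reproduced here): bash's `noclobber` option performs the same `O_CREAT|O_EXCL` open as the linked kind code, and that open neither blocks on, nor conflicts with, an `flock(1)` lock.

```shell
#!/usr/bin/env bash
LOCK="$(mktemp -d)/config.lock"   # illustrative stand-in for ~/.kube/config.lock

# Scheme 1 (this patch): hold an exclusive advisory lock via flock(1).
exec 200>>"$LOCK"
flock -x 200                       # held for the rest of the script

# Failure mode 1: a kind-style O_EXCL create fails only because the file
# exists (flock created it), not because of the advisory lock itself.
( set -o noclobber; : > "$LOCK" ) 2>/dev/null && echo "acquired" || echo "file exists"

# Failure mode 2: once the path is gone, the O_EXCL create succeeds
# immediately, even though the flock lock is still held.
rm -f "$LOCK"
( set -o noclobber; : > "$LOCK" ) 2>/dev/null && acquired=yes || acquired=no
echo "kind-style lock acquired while flock held: $acquired"
```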
I have explicitly added the command to remove the lock file. Other than that, it seems compatible to me, because when I tried both scripts together locally, the create script waited for the lock to be released by the delete script.

Regarding your second question, that we could still have a context in the config for which the control plane is not ready yet: that won't be a problem for us, because the `creationTimestamp` lookup is OR-ed with `true` (`|| true`), so the script won't panic and will run as usual.
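A minimal sketch of the guard being referred to, with the `kubectl | jq` pipeline stood in by `false` (no cluster is assumed here): `|| true` keeps the failed lookup from aborting the script under `set -e`, and the empty timestamp is simply skipped.

```shell
#!/usr/bin/env bash
set -e
# Stand-in for: kubectl get nodes --context $context ... | jq -r '...' || true
# `false` plays the role of a kubectl call against a half-created cluster.
creationTimestamp=$(false || true)

if [[ -z "$creationTimestamp" ]]; then
    # An empty timestamp means the control plane was not queryable yet;
    # the cleanup loop just moves on to the next context.
    echo "skipping context: no creationTimestamp"
fi
```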
Please share your steps; this is not what I observe in my case. `test.sh` and `test.go` can execute concurrently (`test.sh` is supposed to represent the cleanup step in `kind-setup.sh` here, using the same `flock` command).
I didn't use `test.go` for my local testing. I did the following: I created a `test.sh` script similar to the one above, and executed the `kind create cluster` command in one shell and the `test.sh` script in another shell, and there I didn't get any error. I did this because in the actual test scenario we are also just making calls to the `kind create cluster` command.
That doesn't show anything. You would need the following test perhaps:

- run `test.sh` with a long sleep while the exclusive lock is held (a few minutes maybe)
- run `kind create cluster` in a second terminal

If `kind create cluster` can succeed before `test.sh` releases the lock, then you know it's not working.
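The suggested experiment can be compressed into one runnable script, with a background job playing terminal 1 (durations shortened for illustration). Here both sides use `flock`, so the second command does wait; the point of the experiment is that when the second side is the real `kind` binary, it does not.

```shell
#!/usr/bin/env bash
LOCK="$(mktemp -d)/config.lock"

# "Terminal 1": test.sh holding the exclusive lock with a long sleep.
( flock -x 200; sleep 3 ) 200>>"$LOCK" &
sleep 0.5                      # give the background job time to take the lock

# "Terminal 2": a flock-aware stand-in for `kind create cluster`.
start=$(date +%s)
( flock -x 200; : ) 200>>"$LOCK"
elapsed=$(( $(date +%s) - start ))
echo "second command waited ${elapsed}s for the lock"
wait
```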
I tested the same with a long sleep. The `kind create cluster` command didn't succeed before `test.sh`, but the problem was that it didn't even wait for the lock to be released; it just exited. I agree with you that `flock` does not seem to be compatible with the Go locking mechanism.

One solution which I can see could work here is that before invoking any `kind create` or `kind delete` command we can use `flock`, so that `flock` would wait for the lock to be released and then we can run the kind commands successfully.

What's your suggestion on this @antoninbas?
Yes, you would get the `ERROR: failed to create cluster: failed to lock config file: open /Users/abas/.kube/config.lock: file exists` error. It's a different mechanism from `flock`. In Kind, the presence of the lock file means that someone else owns the config file, and it triggers an immediate failure: acquiring the lock simply means creating the file, and releasing the lock means deleting the file. `flock` will also not remove the file when the lock is released, so Kind will never succeed once `flock` has been used.

My suggestion is still to have our own lock file as I initially proposed (I also still think it may be a good idea to have our own `.clusters` file), and wrap all Kind commands with `flock`. This seems to be what you are describing as well. Trying to use `.kube/config.lock` with `flock` is guaranteed to fail by preventing Kind from ever succeeding.
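A sketch of what that suggestion could look like in `kind-setup.sh` (the lock file path and wrapper name are hypothetical, not from the PR): a lock file owned by the CI scripts, with every kind invocation run through flock's command mode.

```shell
#!/usr/bin/env bash
# Hypothetical dedicated lock file; unlike .kube/config.lock, kind never
# looks at this path, so flock leaving it behind is harmless.
KIND_LOCK_FILE="${TMPDIR:-/tmp}/kind-setup.lock"

run_with_kind_lock() {
    # flock(1) command mode: open the lock file, take an exclusive lock,
    # run the command, and release the lock when the command exits.
    flock -x "$KIND_LOCK_FILE" "$@"
}

# Intended usage (requires kind):
#   run_with_kind_lock kind create cluster --name "$cluster_name"
#   run_with_kind_lock kind delete cluster --name "$cluster_name"

# Demonstration without kind:
result=$(run_with_kind_lock echo "serialized kind command")
echo "$result"
```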