Observed behavior
I tried deleting 250 PVCs sequentially using the script below:
for i in {1..250};do kubectl -n gcs delete pvc pvc$i;done
Below are the observations:
* All 250 PVCs were deleted without any issue, as expected.
* 143 PVs failed to delete; each of these PVs shows a status of Released.
* On glustercli, 146 gluster volumes failed to delete and are left in the Stopped state (see the inspection sketch below).
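For reference, a minimal sketch of how the leftover state can be inspected. The kubectl invocation assumes the default `kubectl get pv` column layout (STATUS is the fifth column), and the glustercli invocation assumes a reachable glusterd2 endpoint with default settings:

# Count PVs stuck in the Released state (assumes STATUS is column 5 of the default output)
kubectl get pv --no-headers | awk '$5 == "Released"' | wc -l

# List the gluster volumes and their state as seen by glusterd2
glustercli volume list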
Expected/desired behavior
Deleting a PVC should delete the PVC and its bound PV, and tear down the backing gluster volume (with the reclaim policy set to Delete, the provisioner is expected to ask the CSI driver to delete the volume).
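As a sanity check that the Delete reclaim policy is actually in effect, the PVs can be listed with their reclaim policy and phase; a hedged one-liner using standard kubectl custom columns:

kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase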
Details on how to reproduce (minimal and precise)
Create a 3-node GCS setup using vagrant.
Create 250 PVCs (a creation sketch for this step follows the list).
Delete all the PVCs using the script below:
for i in {1..250};do kubectl -n gcs delete pvc pvc$i;done
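A minimal sketch for the PVC-creation step. The StorageClass name glusterfs-csi is an assumption (substitute whatever class the GCS deploy created); the PVC names pvc1..pvc250 match the delete loop above:

for i in {1..250}; do
  kubectl -n gcs create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc$i
spec:
  storageClassName: glusterfs-csi  # assumed name; use the class from the GCS deploy
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
done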
Information about the environment:
Glusterd2 version used (e.g. v4.1.0 or master): v6.0-dev.114.gitd51f60b
Operating system used: CentOS 7.6
Glusterd2 compiled from sources, as a package (rpm/deb), or container:
Using External ETCD: (yes/no, if yes ETCD version): yes; etcd Version: 3.3.8
If container, which container image:
Using kubernetes, openshift, or direct install:
If kubernetes/openshift, is gluster running inside kubernetes/openshift or outside: Kubernetes
Attaching CSI and gluster provisioner logs:
gluster-provisioner-logs.txt
csi-provisioner logs.txt