I have the following manifests on my Kubernetes cluster, which consists of two nodes, running Piraeus 1.10 (the YAML itself is not reproduced here):

StorageClass:
VolumeSnapshotClass:
VolumeSnapshot:
PVC created from the snapshot:
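A minimal sketch of what such a manifest set could look like, assuming the linstor.csi.linbit.com provisioner; the names, storage pool, and parameter values below are illustrative assumptions, not the manifests from the original report:

```yaml
# Sketch only: names, the storage pool, and parameter values are assumptions,
# not the manifests from the original report.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-sc
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"           # place a replica on both worker nodes (assumed)
  storagePool: "lvm-thin"  # snapshots require a thin-provisioned pool (assumed name)
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: piraeus-snapshots
driver: linstor.csi.linbit.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: piraeus-snapshots
  source:
    persistentVolumeClaimName: data-pvc   # the pre-existing PVC being snapshotted
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: piraeus-sc
  dataSource:
    name: data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi        # matches the 10G mentioned below
```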
When the PVC and its volume are created, the volume references the snapshot on node worker1 and allocates no additional space there. But although the same snapshot exists on node worker2, LINSTOR replicates the newly created volume to worker2 without any reference to the existing snapshot, starts syncing the volume, and allocates 10G of space on worker2 only.

I couldn't find documentation explaining this behavior. Why can't the volume on node worker2 reference the existing snapshot without allocating more space, just like on node worker1?
I guess this is a limitation of the current implementation: we only restore on one node; the other node creates a regular, empty LV instead.

We do it this way because we don't really want to make scheduling decisions in Piraeus: that should be left to LINSTOR. In theory, the new volume could be part of a different resource group with different placement constraints. I guess one could add a special case for that in LINSTOR CSI, but that is a low-priority issue.
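For illustration only, a sketch of the kind of diverging configuration meant here, assuming the LINSTOR CSI resourceGroup and placementCount StorageClass parameters; nothing below comes from this issue:

```yaml
# Hypothetical second StorageClass: a PVC restored from the snapshot through
# this class belongs to a different resource group with its own placement
# rules, so the CSI driver cannot simply reuse the snapshot's original placement.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-sc-archive
provisioner: linstor.csi.linbit.com
parameters:
  resourceGroup: "rg-archive"   # assumed resource group name
  placementCount: "1"           # single replica, unlike the source volume's "2"
  storagePool: "lvm-thin"
```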