
Why does a volume restored from a snapshot not reference the related snapshot on all nodes? #766

Open
Abbas-b-b opened this issue Jan 26, 2025 · 1 comment

Comments

@Abbas-b-b

I have the following manifests on my Kubernetes cluster, which consists of two nodes, running version 1.10 of Piraeus:

StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor
provisioner: linstor.csi.linbit.com
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: ext4
  linstor.csi.linbit.com/resourceGroup: "rg-default"
  linstor.csi.linbit.com/storagePool: "lvm-thin"
  linstor.csi.linbit.com/nodeList: "worker1 worker2"
  property.linstor.csi.linbit.com/DrbdOptions/Net/protocol: "A"

VolumeSnapshotClass:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: linstor
driver: linstor.csi.linbit.com
deletionPolicy: Delete

VolumeSnapshot:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  volumeSnapshotClassName: linstor
  source:
    persistentVolumeClaimName: test-pvc

PVC from the snapshot:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-fromsnapshot
spec:
  storageClassName: linstor
  dataSource:
    name: test-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

When the PVC and its volume are created, the volume references the snapshot on node worker1 and allocates no additional space there. But although the same snapshot exists on node worker2, LINSTOR replicates the newly created volume to worker2 without any reference to the existing snapshot, starts syncing the volume, and allocates the full 10G of space, but only on node worker2.

I couldn't find documentation explaining this behavior. Why can't the volume on node worker2 reference the existing snapshot without allocating more space, just like it does on node worker1?
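For reference, this is how the per-node allocation can be inspected with the plain LINSTOR client; a minimal sketch, assuming the client is reachable (for example by exec'ing into the controller pod), with no resource filter filled in:

# Allocated size per node and volume; the restored resource shows up on both workers
linstor volume list

# Snapshots known to LINSTOR and where they exist
linstor snapshot list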

@WanzenBug (Member)

I guess this is a limitation of the current implementation: we only restore on one node; the other node will create a regular "empty" LV instead.

We do it this way because we don't really want to make scheduling decisions in Piraeus: that should be left to LINSTOR. In theory the new volume could be part of a different resource group, with different placement constraints. I guess one could add a special case for that in LINSTOR CSI, but that is a low-priority issue.
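For context, LINSTOR itself can restore a snapshot into a new resource on the nodes that hold the snapshot; it is only the CSI restore path that does this on a single node. A rough sketch with the plain LINSTOR client, where the resource and snapshot names are placeholders rather than values from this cluster:

# Create an empty resource definition to restore into
linstor resource-definition create restored-res

# Restore the volume definitions and snapshot data on the nodes that hold the snapshot
linstor snapshot volume-definition restore --from-resource pvc-xxx --from-snapshot snap-xxx --to-resource restored-res
linstor snapshot resource restore --from-resource pvc-xxx --from-snapshot snap-xxx --to-resource restored-res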
