Support multiple snapshot classes per driver #117

Closed

Conversation

blampe commented Aug 29, 2022

The plugin currently only supports one VolumeSnapshotClass via the "velero.io/csi-volumesnapshot-class" label. If a driver has more than one VolumeSnapshotClass with this label, there is no way to control which one will be used for backups.

It's not unusual to have multiple snapshot classes -- e.g. one for in-cluster snapshots, another for off-site backups -- and it would be enormously convenient to be able to use the CSI plugin to drive backups for all such classes.

To that end, this change introduces a new label, "velero.io/csi-volumesnapshot-location", applied to the VolumeSnapshotClass. If the value of this label matches one of the VolumeSnapshotLocations used by the backup, then that VolumeSnapshotClass will be used. This allows the user to control which VolumeSnapshotClass is used for a particular driver based on their choice of volume snapshot location.

Existing behavior is unchanged.
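
For illustration, here is a minimal sketch of how the proposed label could be used; the class name, driver, and the "offsite" location name are made up for this example:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-offsite                                   # hypothetical name
  labels:
    velero.io/csi-volumesnapshot-class: "true"        # existing label, unchanged
    velero.io/csi-volumesnapshot-location: "offsite"  # proposed label; value names a VolumeSnapshotLocation
driver: example.csi.driver.io                         # placeholder driver
deletionPolicy: Retain
---
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: offsite-backup                                # hypothetical name
  namespace: velero
spec:
  volumeSnapshotLocations:
    - offsite  # matches the label value above, so that snapshot class would be chosen for this backup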

reasonerjt (Contributor) commented

 If a driver has more than one VolumeSnapshotClass with this label, there is no way to control which one will be used for backups.

This should not happen, because the label is set to control which one will be used for Velero's backups, right?

blampe (Author) commented Sep 16, 2022

 This should not happen, because the label is set to control which one will be used for Velero's backups, right?

@reasonerjt it's just a label, and the same label can be applied to any number of snapshot classes. Are you asking why someone would want multiple snapshot classes to be tagged with this label? I address that in the next sentence:

It's not unusual to have multiple snapshot classes -- e.g. one for in-cluster snapshots, another for off-site backups -- and it would be enormously convenient to be able to use the CSI plugin to drive backups for all such classes.

Limiting Velero to one snapshot class per driver is unnecessarily restrictive.

Example: I'd like to schedule daily backups for in-cluster snapshots, weekly backups for in-cluster object storage, and monthly off-site backups. Each of those scenarios has its own VolumeSnapshotClass. Currently, Velero can only satisfy one of these scenarios; this PR allows it to support all three.
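
To make that concrete, the monthly off-site case in that setup might look roughly like this (the schedule name, cron expression, and "offsite" location are made up; the in-cluster cases would be analogous, each pointing at its own location):

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: monthly-offsite     # hypothetical name
  namespace: velero
spec:
  schedule: "0 3 1 * *"     # run on the first day of every month
  template:
    volumeSnapshotLocations:
      - offsite             # with this PR, selects the snapshot class labeled for this location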

iosifnicolae2 commented Dec 21, 2022

Wouldn't it be a better solution to specify the VolumeSnapshotClass by passing a volumeSnapshotClass field to the velero.io/v1 VolumeSnapshotLocation?

@blampe @reasonerjt wdyt?
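
A rough sketch of that idea, assuming a new volumeSnapshotClass field on the VolumeSnapshotLocation spec (the field does not exist in the current CRD, and all names here are made up):

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: offsite
  namespace: velero
spec:
  provider: example.io/csi            # placeholder provider
  volumeSnapshotClass: csi-offsite    # hypothetical field proposed above, not part of the current CRD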

iosifnicolae2 commented

Also, it seems that volumeSnapshotLocations from the Backup CRD are not used at all by this plugin, right?

sseago (Collaborator) commented Dec 22, 2022

VolumeSnapshotLocations are used by VolumeSnapshotter plugins (aws, azure, gcp). The CSI plugin does not use them, since the CSI plugin doesn't interact with out-of-cluster cloud resources.

jkroepke commented

Hi @reasonerjt! I would also like to share another use case.

In our setup, all of our snapshots require tags to identify the source of the snapshot. By default, the CSI snapshot driver does not create any tags.

We need a dedicated VolumeSnapshotClass for each namespace/customer, like:

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-azuredisk-vsc-inc-1000100
driver: disk.csi.azure.com
deletionPolicy: Delete
parameters:
  tags: 'mandantID=1000100'

In this example, 1000100 is a customer reference.

In conclusion, we have multiple VolumeSnapshotClasses for one CSI StorageClass. Currently, I'm not able to specify the snapshot class in Velero. Maybe an annotation on the PVC could come to the rescue here in the future.
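
Purely as a hypothetical sketch of that PVC-annotation idea (the annotation key and all names below are made up for illustration and not supported by the plugin as of this discussion):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-inc-1000100
  annotations:
    velero.io/csi-volumesnapshot-class: csi-azuredisk-vsc-inc-1000100  # hypothetical annotation selecting the class
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi   # placeholder StorageClass
  resources:
    requests:
      storage: 10Gi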

anshulahuja98 (Collaborator) commented

@blampe you can consider closing this PR since this is being addressed in #178
