ArgoCD/Helm does not create BackupStorageLocation or VolumeSnapshotLocation resources. #646

darnone opened this issue Feb 25, 2025

What steps did you take and what happened:
I am running a helm install with a configuration that has worked in the past, but the BackupStorageLocation and VolumeSnapshotLocation resources are not deployed. If I perform a helm template with my values file, the generated manifests are incorrect:

Generated manifests:

---
# Source: velero/templates/backupstoragelocation.yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: default
  labels:
    app.kubernetes.io/name: velero
    app.kubernetes.io/instance: my-velero
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: velero-8.4.0
spec:
  credential:
  provider: 
  accessMode: ReadWrite
  objectStorage:
    bucket:
---
# Source: velero/templates/volumesnapshotlocation.yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: default
  labels:
    app.kubernetes.io/name: velero
    app.kubernetes.io/instance: my-velero
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: velero-8.4.0
spec:
  credential:
  provider:

As you can see, the name is still "default" and the provider and bucket are empty.

values:

configuration:
  features: EnableCSI
  uploaderType: kopia
  backupStorageLocation:
  - name: velero-backup-storage-location
    bucket: <bucket-name>
    default: true
    provider: aws
    config:
      region: us-east-1
  volumeSnapshotLocation:
  - name: velero-volume-storage-location
    provider: aws
    config:
      region: us-east-1

What did you expect to happen:

I expected the BackupStorageLocation and VolumeSnapshotLocation resources to be created from my values.

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)

The Velero server log shows:

time="2025-02-25T22:11:11Z" level=warning msg="Velero node agent not found; pod volume backups/restores will not work until it's created" logSource="pkg/cmd/server/server.go:650"
time="2025-02-25T22:11:11Z" level=warning msg="Failed to set default backup storage location at server start" backupStorageLocation=default error="backupstoragelocations.velero.io "default" not found" logSource="pkg/cmd/server/server.go:492"

Anything else you would like to add:

I am deploying this with Argo CD (sketched below) but get the same result with plain Helm, even though I have a working install of Velero in another cluster deployed with helmfile.
Environment:

  • helm version (use helm version): v3.17.1
  • helm chart version and app version (use helm list -n <YOUR NAMESPACE>): chart:8.4.0, app: 1.15.2
  • Kubernetes version (use kubectl version): 1.30.9
  • Kubernetes installer & version: asdf 0.16.4
  • Cloud provider or hardware configuration: AWS EKS
  • OS (e.g. from /etc/os-release): client: macOS; EKS nodes: EKS-optimized Amazon Linux 2