
fix(helm): update rook-ceph-cluster ( v1.16.2 → v1.16.4 ) #846

Open
renovate[bot] wants to merge 1 commit into main from renovate/rook-ceph-cluster-1.x
Conversation

@renovate renovate bot commented Feb 5, 2025

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| rook-ceph-cluster | patch | v1.16.2 -> v1.16.4 |
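If you want to confirm which chart versions are published upstream, you can query Rook's release chart repo with Helm. A minimal sketch, assuming the chart comes from Rook's public repo at https://charts.rook.io/release (the repo alias rook-release is arbitrary):

```shell
# Add Rook's upstream Helm repo and refresh its index
helm repo add rook-release https://charts.rook.io/release
helm repo update rook-release

# List recent chart versions to confirm v1.16.4 is the latest patch
helm search repo rook-release/rook-ceph-cluster --versions | head
```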

Release Notes

rook/rook (rook-ceph-cluster)

v1.16.4

Compare Source

Improvements

Rook v1.16.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.16.3

Compare Source

Improvements

Rook v1.16.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.


Configuration

📅 Schedule: Branch creation - "* 0-4,22-23 * * 1-5,* * * * 0,6" in timezone America/Los_Angeles, Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.
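Since automerge is disabled by config, the PR has to be merged by hand once the rendered diff (posted below by the github-actions bot) looks right. With the GitHub CLI that could look like the following sketch, assuming a squash merge matches the repository's merge settings:

```shell
# Check out the Renovate branch locally to inspect it, then merge PR #846
gh pr checkout 846
gh pr merge 846 --squash
```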


github-actions bot commented Feb 5, 2025

--- kubernetes/apps/rook-ceph/cluster/app Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
+++ kubernetes/apps/rook-ceph/cluster/app Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
@@ -13,13 +13,13 @@
     spec:
       chart: rook-ceph-cluster
       sourceRef:
         kind: HelmRepository
         name: rook-ceph
         namespace: flux-system
-      version: v1.16.2
+      version: v1.16.4
   driftDetection:
     mode: disabled
   install:
     remediation:
       retries: 3
   interval: 30m
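After merging, Flux picks the change up on its 30m interval; to roll it out immediately, the reconciliation can be triggered by hand. A sketch with the Flux CLI, using the source and HelmRelease names from the diff above:

```shell
# Refresh the rook-ceph HelmRepository so the v1.16.4 chart is indexed
flux reconcile source helm rook-ceph -n flux-system

# Upgrade the HelmRelease now instead of waiting for the 30m interval
flux reconcile helmrelease rook-ceph-cluster -n rook-ceph --with-source

# Confirm the new chart revision is Ready
flux get helmreleases -n rook-ceph
```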

@renovate renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from ebee10b to 9b780d6 on February 6, 2025 22:34
@jfroy jfroy force-pushed the main branch 2 times, most recently from b9652be to 599ca5f on February 20, 2025 00:50
@renovate renovate bot changed the title from fix(helm): update rook-ceph-cluster ( v1.16.2 → v1.16.3 ) to fix(helm): update rook-ceph-cluster ( v1.16.2 → v1.16.4 ) on Feb 20, 2025
@renovate renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 9b780d6 to 506fa08 on February 20, 2025 21:34

--- HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools
+++ HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools
@@ -16,13 +16,13 @@
       labels:
         app: rook-ceph-tools
     spec:
       dnsPolicy: ClusterFirstWithHostNet
       containers:
       - name: rook-ceph-tools
-        image: quay.io/ceph/ceph:v19.2.0
+        image: quay.io/ceph/ceph:v19.2.1
         command:
         - /bin/bash
         - -c
         - |
           # Replicate the script from toolbox.sh inline so the ceph image
           # can be run directly, instead of requiring the rook toolbox
--- HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
+++ HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
@@ -9,13 +9,13 @@
     enabled: true
   annotations:
     mgr:
       dummy: 'true'
   cephVersion:
     allowUnsupported: false
-    image: quay.io/ceph/ceph:v19.2.0
+    image: quay.io/ceph/ceph:v19.2.1
   cleanupPolicy:
     allowUninstallWithVolumes: false
     confirmation: ''
     sanitizeDisks:
       dataSource: zero
       iteration: 1
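The chart bump also moves the Ceph image from v19.2.0 to v19.2.1, which the operator rolls out to the daemons. Once that finishes, the result can be checked from the CephCluster spec and from inside the toolbox; a sketch using the resource names from the diff above:

```shell
# Image the operator was asked to deploy (from the CephCluster spec)
kubectl -n rook-ceph get cephcluster rook-ceph \
  -o jsonpath='{.spec.cephVersion.image}{"\n"}'

# Versions the running daemons actually report, via the toolbox deployment
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph versions

# Overall cluster health after mons/mgrs/OSDs restart onto the new image
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```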
