fleet-examples

  • A simple deployment + service (x86_64 & arm64):
kubectl apply -f https://raw.githubusercontent.com/suse-edge/fleet-examples/main/gitrepos/general/simple-gitrepo.yaml
  • Akri:
kubectl apply -f https://raw.githubusercontent.com/suse-edge/fleet-examples/main/gitrepos/general/akri-suse-edge-gitrepo.yaml
  • KubeVirt:
kubectl apply -f https://raw.githubusercontent.com/suse-edge/fleet-examples/main/gitrepos/general/kubevirt-suse-edge-gitrepo.yaml
  • MetalLB:
kubectl apply -f https://raw.githubusercontent.com/suse-edge/fleet-examples/main/gitrepos/general/metallb-suse-edge-gitrepo.yaml
  • Elemental:
kubectl apply -f https://raw.githubusercontent.com/suse-edge/fleet-examples/main/gitrepos/general/elemental-gitrepo.yaml
  • Opni:
kubectl apply -f https://raw.githubusercontent.com/suse-edge/fleet-examples/main/gitrepos/general/opni-gitrepo.yaml
  • Longhorn:
kubectl apply -f https://raw.githubusercontent.com/suse-edge/fleet-examples/main/gitrepos/general/longhorn-gitrepo.yaml
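
After applying a GitRepo, you can check that Fleet picked it up and generated the corresponding bundles (a quick sanity check; adjust the namespace if the GitRepo was created somewhere other than fleet-local, the workspace used throughout this README):

# List the GitRepo objects and their sync status
kubectl -n fleet-local get gitrepos.fleet.cattle.io
# Inspect the bundles Fleet generated from them
kubectl get bundles.fleet.cattle.io -A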

A few notes about the Longhorn example:

  • Longhorn creates its own StorageClass, and with the default K3s configuration you can end up with two default StorageClasses:
$ kubectl get sc
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  26m
longhorn (default)     driver.longhorn.io      Delete          Immediate              true                   2s

To make the Longhorn one the only default, you can set the is-default-class annotation on the local-path StorageClass to false:

kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
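
You can then re-run the listing to confirm that only the longhorn StorageClass is still marked as (default):

kubectl get sc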

If instead you want the Longhorn StorageClass not to be marked as the default, you need to tweak the Helm parameters as follows:

persistence.defaultClass=false
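
For reference, the same override applies if you install the chart directly with Helm instead of through Fleet; a minimal sketch using the upstream Longhorn chart repository (not part of this repo):

# Add the upstream Longhorn chart repository and install with the override
helm repo add longhorn https://charts.longhorn.io
helm upgrade --install longhorn longhorn/longhorn \
  --namespace longhorn-system --create-namespace \
  --set persistence.defaultClass=false

Within the Fleet workflow, the value goes under the Helm values in Longhorn's fleet.yaml, or into a ConfigMap as described further below.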
  • The Longhorn UI is not exposed by default. If you want to expose it, you need to specify a couple of Helm values such as:
ingress:
  enabled: true
  host: "longhorn-example.com"

You can modify Longhorn's fleet.yaml file to fit your needs.
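
Once the ingress is enabled, you can confirm that it was created and see the host it exposes (the hostname shown depends on the values you set):

kubectl -n longhorn-system get ingress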

  • You can configure Fleet to read custom Helm values from a ConfigMap created in the cluster, such as:
  valuesFrom:
  - configMapKeyRef:
      name: longhorn-chart-values
      # default to namespace of bundle
      namespace: fleet-local
      key: values.yaml

Basically, you can create a ConfigMap there with the values.yaml content you want to provide. This is not restricted to the ingress; anything included in the Longhorn Helm chart values.yaml can be used:

cat <<- EOF | kubectl apply -f -
apiVersion: v1
data:
  values.yaml: |
    ingress:
      enabled: true
      host: "longhorn-example.com"
kind: ConfigMap
metadata:
  name: longhorn-chart-values
  namespace: fleet-local
EOF
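
After creating it, you can double-check the values that Fleet will merge into the chart:

kubectl -n fleet-local get configmap longhorn-chart-values -o jsonpath='{.data.values\.yaml}'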
  • The Longhorn fleet.yaml included in this repo contains a customization such as:
targetCustomizations:
  # Customization Name
- name: local
  # If the local cluster is used
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-display-name: local
  helm:
    values:
      ingress:
        # Use these custom Helm values
        enabled: true
        # This is a manual annotation that needs to be set in the clusters.fleet.cattle.io/local object
        host: longhorn-${ .ClusterAnnotations.ingressip }.sslip.io
        # This annotation will enable user/password authentication for the Longhorn UI
        annotations:
          traefik.ingress.kubernetes.io/router.middlewares: longhorn-system-longhorn-basic-auth@kubernetescrd
  # This kustomization will create the required objects for the user/password authentication
  kustomize:
    dir: overlays/local

This means:

  • If using a local cluster
  • If the Traefik Ingress controller is deployed
  • If the Traefik Ingress uses sslip.io
  • If the local cluster has been annotated with the Ingress IP:

kubectl annotate clusters.fleet.cattle.io/local -n fleet-local "ingressip=$(kubectl get svc -n kube-system traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

Then it will enable the Longhorn UI, protected via user/password, using a kustomization overlay.
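
To verify that the annotation the templated host relies on is in place:

kubectl -n fleet-local get clusters.fleet.cattle.io local -o jsonpath='{.metadata.annotations.ingressip}{"\n"}'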

This is basically intended to be used with the create_vm.sh script, as in:

./create_vm.sh -f myvm
export KUBECONFIG=$(./get_kubeconfig.sh -f myvm -w)
kubectl annotate clusters.fleet.cattle.io/local -n fleet-local "ingressip=$(kubectl get svc -n kube-system traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
kubectl apply -f https://raw.githubusercontent.com/suse-edge/fleet-examples/main/gitrepos/general/longhorn-gitrepo.yaml

NOTE: Due to rancher/fleet#1507, this needs to be done before applying the longhorn gitrepo:

helm -n cattle-fleet-system upgrade --create-namespace fleet-crd https://github.com/rancher/fleet/releases/download/v0.7.0/fleet-crd-0.7.0.tgz
helm -n cattle-fleet-system upgrade --create-namespace fleet https://github.com/rancher/fleet/releases/download/v0.7.0/fleet-0.7.0.tgz
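
You can confirm the deployed Fleet chart versions afterwards with:

helm -n cattle-fleet-system list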
  • To uninstall the application, the deleting-confirmation-flag setting must be set to true (as per the Longhorn uninstall instructions) before removing the Helm chart or the GitRepo object:
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
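
With the flag set, the GitRepo object can then be removed, for example:

kubectl delete -f https://raw.githubusercontent.com/suse-edge/fleet-examples/main/gitrepos/general/longhorn-gitrepo.yaml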