Example is not working in k8s 1.21 #39
Hi @itek09, could you please show the CR you are using and the resulting generated StatefulSet?
Hello, I have the same issue. When I enable metrics, the resulting YAML has the wrong image: the redis_exporter image is used as the image for the redis (server) container, so the `redis-cli` command can't be found. I checked everything, and the images in my CR are in the right place. This is the resulting Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    cni.projectcalico.org/containerID: e9883e0f0e132ad66f938a9763f1d6e3a0b1eb9bc7f41895385f819a6dfd0cc7
    cni.projectcalico.org/podIP: 10.42.244.150/32
    cni.projectcalico.org/podIPs: 10.42.244.150/32
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2021-10-22T22:29:51Z"
  generateName: redis-redis-cluster-01-prod-
  labels:
    app.kubernetes.io/cr-name: redis-cluster-01-prod
    app.kubernetes.io/instance: redis-cluster-01-prod
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis-cr
    app.kubernetes.io/section: cr
    argocd.argoproj.io/instance: redis-cluster-01-prod
    controller-revision-hash: redis-redis-cluster-01-prod-67b669457c
    helm.sh/chart: redis-cr-0.1.2
    redis: redis-cluster-01-prod
    statefulset.kubernetes.io/pod-name: redis-redis-cluster-01-prod-0
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:cluster-autoscaler.kubernetes.io/safe-to-evict: {}
          f:seccomp.security.alpha.kubernetes.io/pod: {}
        f:generateName: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/cr-name: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/managed-by: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/section: {}
          f:argocd.argoproj.io/instance: {}
          f:controller-revision-hash: {}
          f:helm.sh/chart: {}
          f:redis: {}
          f:statefulset.kubernetes.io/pod-name: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"d7a6f812-cffd-4012-9dbc-85459ffc0572"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:affinity:
          .: {}
          f:podAntiAffinity:
            .: {}
            f:requiredDuringSchedulingIgnoredDuringExecution: {}
        f:containers:
          k:{"name":"redis"}:
            .: {}
            f:args: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:livenessProbe:
              .: {}
              f:exec:
                .: {}
                f:command: {}
              f:failureThreshold: {}
              f:initialDelaySeconds: {}
              f:periodSeconds: {}
              f:successThreshold: {}
              f:timeoutSeconds: {}
            f:name: {}
            f:readinessProbe:
              .: {}
              f:exec:
                .: {}
                f:command: {}
              f:failureThreshold: {}
              f:initialDelaySeconds: {}
              f:periodSeconds: {}
              f:successThreshold: {}
              f:timeoutSeconds: {}
            f:resources:
              .: {}
              f:limits:
                .: {}
                f:cpu: {}
                f:memory: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:memory: {}
            f:securityContext:
              .: {}
              f:allowPrivilegeEscalation: {}
              f:capabilities:
                .: {}
                f:drop: {}
              f:readOnlyRootFilesystem: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/config/redis.conf"}:
                .: {}
                f:mountPath: {}
                f:name: {}
                f:readOnly: {}
                f:subPath: {}
              k:{"mountPath":"/data"}:
                .: {}
                f:mountPath: {}
                f:name: {}
            f:workingDir: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:hostname: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext:
          .: {}
          f:fsGroup: {}
          f:runAsGroup: {}
          f:runAsNonRoot: {}
          f:runAsUser: {}
        f:subdomain: {}
        f:terminationGracePeriodSeconds: {}
        f:volumes:
          .: {}
          k:{"name":"redis-cluster-01-prod-data"}:
            .: {}
            f:name: {}
            f:persistentVolumeClaim:
              .: {}
              f:claimName: {}
          k:{"name":"redis-redis-cluster-01-prod-config"}:
            .: {}
            f:configMap:
              .: {}
              f:defaultMode: {}
              f:name: {}
            f:name: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-10-22T22:29:51Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:cni.projectcalico.org/containerID: {}
          f:cni.projectcalico.org/podIP: {}
          f:cni.projectcalico.org/podIPs: {}
    manager: calico
    operation: Update
    time: "2021-10-22T22:30:03Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.42.244.150"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2021-10-22T22:30:05Z"
  name: redis-redis-cluster-01-prod-0
  namespace: redis
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: redis-redis-cluster-01-prod
    uid: d7a6f812-cffd-4012-9dbc-85459ffc0572
  resourceVersion: "105526154"
  uid: 27bc4da9-617c-4e33-8da9-d3b6d5c259e5
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/cr-name
            operator: In
            values:
            - redis-cluster-01-prod
          - key: helm.sh/chart
            operator: In
            values:
            - redis-cr-0.1.2
          - key: app.kubernetes.io/instance
            operator: In
            values:
            - redis-cluster-01-prod
        topologyKey: kubernetes.io/hostname
  containers:
  - args:
    - /config/redis.conf
    image: docker.io/oliver006/redis_exporter:v1.29.0-alpine
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - redis-cli
        - ping
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: redis
    readinessProbe:
      exec:
        command:
        - redis-cli
        - ping
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        cpu: 50m
        memory: 100Mi
      requests:
        cpu: 50m
        memory: 100Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - all
      readOnlyRootFilesystem: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /config/redis.conf
      name: redis-redis-cluster-01-prod-config
      readOnly: true
      subPath: redis.conf
    - mountPath: /data
      name: redis-cluster-01-prod-data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8tb4q
      readOnly: true
    workingDir: /data
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: redis-redis-cluster-01-prod-0
  imagePullSecrets:
  - name: image-pull-secret
  nodeName: docker04
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 7777777
    runAsGroup: 7777777
    runAsNonRoot: true
    runAsUser: 7777777
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: default
  serviceAccountName: default
  subdomain: redis-redis-cluster-01-prod-headless
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: redis-cluster-01-prod-data
    persistentVolumeClaim:
      claimName: redis-cluster-01-prod-data-redis-redis-cluster-01-prod-0
  - configMap:
      defaultMode: 420
      name: redis-redis-cluster-01-prod
    name: redis-redis-cluster-01-prod-config
  - name: kube-api-access-8tb4q
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
```
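For context on the mismatch described above: the container is named `redis`, runs `redis-cli ping` probes, and mounts `redis.conf`, yet its `image` is the redis_exporter image. When metrics are enabled, one would instead expect something like the following shape, with the server and the exporter as separate containers (a sketch only; the server image tag and the sidecar name here are illustrative assumptions, not values taken from this thread):

```yaml
# Expected shape of the generated containers when metrics are enabled:
# the "redis" container runs a Redis server image, and the exporter
# runs as a separate sidecar container alongside it.
containers:
- name: redis
  image: docker.io/library/redis:6.2-alpine   # assumed server image, for illustration
  args:
  - /config/redis.conf
  livenessProbe:
    exec:
      command: ["redis-cli", "ping"]          # works here because the server image ships redis-cli
- name: metrics                               # hypothetical sidecar name
  image: docker.io/oliver006/redis_exporter:v1.29.0-alpine
```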
Hi @Jonas18175, could you please paste the Redis object here as well?
That was the only container that was created; I don't have another object. @nrvnrvn
Running into this issue as well on Kubernetes version 1.21, when setting the liveness and readiness probes. Is there any way to globally overwrite the value? EDIT: to clarify, the same setting in version 1.19 doesn't seem to be an issue for the applications I am testing.
Superseded by nrvnrvn/k8dis#1. @heywoodlh it will be possible to set
Hello!
First, I have to say thank you for the effort on this solution.
I'm trying to deploy a new cluster on k8s 1.21, but the pods are not able to start. It seems that the liveness and readiness probes are failing on the redis container. I was able to start the pods by removing the probe configuration, so the pod starts (Redis starts correctly), but the operator is not able to set a master (the "error":"minimum replication size is not met, only 0 are healthy" message is not appearing anymore).
Please let me know if I can give you any logs or help you anyhow.
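If the probes fail only because `redis-cli` is missing from the container image, one possible interim workaround is to probe the Redis TCP port directly instead of exec'ing `redis-cli` (a sketch, assuming the operator allows overriding the generated probes, which may not currently be the case; 6379 is the default Redis port):

```yaml
# tcpSocket probes only check that the port accepts connections; they are
# weaker than "redis-cli ping" but do not depend on binaries in the image.
livenessProbe:
  tcpSocket:
    port: 6379        # default Redis port
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  tcpSocket:
    port: 6379
  initialDelaySeconds: 10
  periodSeconds: 10
```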