Support for securityContext for pods #30
@lvikstro I get the following error even when I specify runAsUser 0. Are you sure it works? 🤔

securityContext:
  runAsGroup: 0
  runAsNonRoot: true
  runAsUser: 0
  fsGroup: 2000
|
Hi there @lvikstro, the security context does work (it is handled by Kubernetes, not by the driver), but to make it work I had to configure a few things:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: normal
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.san.synology.com
# if all params are empty, synology CSI will choose an available location to create the volume
parameters:
  dsm: "<dsmip>"
  location: /volume<n>
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true

This combination fixed it for me. |
Hello.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.san.synology.com
parameters:
  dsm: '192.168.16.240'
  location: '/volume1'
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Retain
allowVolumeExpansion: true

This is my SC spec, but it still doesn't work for me .-. Any ideas? |
I am also facing this issue. StatefulSet definition
Storage class definition
These are the logs from postgresql when using this configuration
Anyway, if I change the directory to an emptyDir (instead of a PVC from Synology NFS) it works, and I can verify that the owner is 1001 (which I have set in the securityContext).
Owner of the directory
|
Depending on the K8s version you are using, there is a problem with the DelegateFSGroupToCSIDriver feature gate. This is enabled by default starting with K8s 1.23. Normally, the kubelet is responsible for "fulfilling" the fsGroup (recursively changing ownership of the volume); with this gate enabled, that job is delegated to CSI drivers that declare the corresponding capability. The Synology CSI driver declares that it is able to do that, but just isn't doing it. The quick workaround is to disable this feature gate and always let the kubelet do it. The real solution would be for the CSI driver either to not declare this capability, or to actually implement it. |
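A minimal sketch of what that workaround looks like in a kubelet configuration file; the path /var/lib/kubelet/config.yaml is only the common kubeadm default and may differ on your installation, so treat this as an illustration rather than exact instructions:

# Hypothetical kubelet config excerpt; merge with your existing settings.
# Common default location on kubeadm-based nodes: /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  DelegateFSGroupToCSIDriver: false   # let the kubelet chown volumes itself again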
I am also running into this issue with OpenShift 4.11 (based on k8s 1.24). CSI driver provisions and mounts the volume with no issues, but pod instantiation always fails with Permission Denied. |
The kubelet isn't doing the chown for NFS volumes. You'll have to use an init container for that (see the sketch below). |
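A hedged sketch of such an init container; the name, image, and mount path are illustrative, and the 1001 uid/gid is taken from the earlier comment:

initContainers:
  - name: fix-permissions            # hypothetical name
    image: busybox:1.36              # any image that provides chown
    command: ["sh", "-c", "chown -R 1001:1001 /data"]
    securityContext:
      runAsUser: 0                   # needs root to change ownership
    volumeMounts:
      - name: data                   # must match the pod volume backed by the NFS PVC
        mountPath: /data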
@rblundon Do you have a bit more info? Maybe the storageclass you're using and the podspec |
Disabling DelegateFSGroupToCSIDriver worked perfectly for me btw. Thank you so much! |
How do I disable DelegateFSGroupToCSIDriver on an existing cluster? |
@Ryanznoco Hi, it depends on what Kubernetes distribution you use, but you need to look up "feature gate" |
It is not working for me. I added "--feature-gates=DelegateFSGroupToCSIDriver=false" to the api-server manifest; after a complete redeploy, my redis pod still gets a "Permission denied" error. My Kubernetes version is 1.23.10. |
You need to add the Feature Gate to every kubelet config for your cluster, since this is a kubelet gate, not an api server one. Depending on your installation method, there might be an easier way of changing the kubelet config |
Sorry, I don't know how to do this. Can I delete the line csi.NodeServiceCapability_RPC_VOLUME_MOUNT_GROUP, in the source code, then recompile and install the CSI driver? |
That would be, like, a lot more work than disabling the feature gate. Please read the whole message before proceeding. There are a few ways of doing that: doing it manually should work most of the time, other approaches depend on your installation.

Manually: If you have K8s installed locally, on VMs, or any other way where your installation isn't managed by a cloud provider, you can add the following to each node's kubelet configuration and restart the kubelet:

featureGates:
  DelegateFSGroupToCSIDriver: false

kubeadm: kubeadm keeps the current kubelet config in a ConfigMap. You can find the whole process well documented here.

talos linux: Just putting that one in here, since I am working with that right now: in your worker configuration, add the feature gates you want to en/disable, as depicted in their docs here.

Others: For other installation methods, you'll have to consult their respective docs on how to disable kubelet feature gates.

But yes, you could probably also remove this capability, recompile, rebuild the image and patch your deployment, but no guarantee on that. At this point, I might as well open a PR for this issue. We'll see. Hope this helps you and you get the driver running correctly. Let me know how it went, I'm invested now 😆 |
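For the talos linux route mentioned above, a hedged sketch of the worker machine config patch; field names should be verified against the Talos docs:

machine:
  kubelet:
    extraArgs:
      feature-gates: DelegateFSGroupToCSIDriver=false   # disable the gate on this node's kubelet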
This makes sense, but how would it be accomplished on OpenShift? |
Sorry, I don't have any experience in OpenShift |
@rblundon for OpenShift problems you really SHOULD ask Red Hat |
@schoeppi5 Thank you for your help. I solved it by modifying the source code. I also tried modifying the kubeadm configmap, but it still doesn't work. Looking forward to the new release with your PR. |
@Ryanznoco - I will try reinstalling the CSI driver from your repo. @inductor - I work for RH, but in sales, not engineering. Pretty sure this wouldn't get any traction, as it looks to be an issue with the Synology driver not doing what it advertises it is capable of doing. |
The problem here is that this CSI driver just declares the capability but does nothing to actually implement it, unlike other CSI drivers. See here for an example: https://github.com/kubernetes-csi/csi-driver-smb/pull/379/files We should point out this limitation in the README, because more and more k8s clusters and distros enforce security contexts nowadays, and the days when everything ran as root are hopefully gone. |
DelegateFSGroupToCSIDriver is enabled by default and can no longer be turned off as of 1.26, since the feature is considered GA. |
I also just upgraded to v1.26.1, and the kubelet complained and wouldn't allow me to disable the feature gate anymore. This is a pretty major bug, which means the CSI driver doesn't work on k8s v1.26 or later. It's a pretty easy fix. |
hi @vaskozl, Of course! Thanks for letting us know about it! |
A new update of synology/synology-csi:latest is available now.
In the previous version, we tried to implement securityContext like the example, but still missed something. We'll check and fix it in the future. |
@chihyuwu Are you willing to move the image to GitHub instead of Docker Hub? Rate limits can be problematic in some environments sharing the same outgoing global IP address(es). |
When will it be released? |
Thanks @schoeppi5 for the detailed description! Adding the K3s instructions for those who might need them until it is resolved:
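A sketch of one way to do this on K3s, assuming the usual kubelet-arg mechanism in /etc/rancher/k3s/config.yaml (restart the k3s service afterwards); this is an illustration, not necessarily the exact instructions referenced here:

# /etc/rancher/k3s/config.yaml on every node running a kubelet
kubelet-arg:
  - "feature-gates=DelegateFSGroupToCSIDriver=false"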
Works for me on K3s v1.25.3. |
Works on <=1.25. On 1.26 this flag can no longer be changed. |
@chihyuwu v1.1.1 still contains this bug? |
Hi @salzig, |
Sorry, I've already found the parameter for GID and UID, problem solved.
|
I am still unable to use the CSI driver to change the ownership of the mounted FS, @chihyuwu. I am running v1.1.3. |
Starting a container with a security context:
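(The exact spec was not preserved here; a representative securityContext with placeholder uid/gid values:)

securityContext:
  runAsUser: 1000    # placeholder
  runAsGroup: 1000   # placeholder
  fsGroup: 1000      # expected to make the mounted volume writable by this group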
Still results in a disk that is root:root when using the iSCSI provider. Is there a way to get the user to match? I thought the |
After much work, kubevirt/containerized-data-importer#2523 (comment) pointed me at setting the CSIDriver's fsGroupPolicy to "File", which is now working. However, the chart doesn't support this, so I had to disable the CSIDriver object in the chart and install my own. If this is expected (it doesn't sound like it is), the chart should be updated. However, I'm not clear why this is necessary; it seems like a bug, and the default driver setting should work. |
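For reference, a minimal sketch of a CSIDriver object with fsGroupPolicy set to File; the driver name matches the provisioner used earlier in this thread, and the other spec fields are illustrative, so keep whatever the chart normally sets:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.san.synology.com
spec:
  attachRequired: true      # illustrative; keep the chart's original value
  podInfoOnMount: true      # illustrative; keep the chart's original value
  fsGroupPolicy: File       # the kubelet applies fsGroup ownership on every mount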
@chris-sanders If I remember this issue correctly, the fix discussed above does not solve the issue but is a workaround. The CSI spec defines that a driver declares certain capabilities, for example whether it is able to set the uid and gid of the mount. When a new PV is requested, the kubelet checks for this capability and defers setting the uid and gid in accordance with the security context to the CSI driver. The feature gate mentioned above, which was present in the kubelet until v1.25, disabled this behaviour and instructed the kubelet to always chown the new mount. |
@schoeppi5 yes, I can confirm that in 1.28 the feature gate was removed and the CSI driver no longer works with the default method. I had to set the fsGroupPolicy to "File" to get things to work correctly. It is slow, but adding the nodiscard option helps. I guess ideally the CSI driver should be doing the right thing; until then the helm chart should at least default to File with nodiscard. |
Ran into this issue when using the v1.1.3 image also. Turns out the v1.1.3 image provided by Synology includes the older binary v1.1.2 (#71). Building my own image resolved the issue. |
Which commit did you build? I am still getting the error on HEAD. |
I built fbe1b7e. You can also check your container logs to see the version that is running. The version number is logged on start up. |
Hi! I've run into this bug as well. Despite my container's process running as uid/gid 1000, I had to configure its pod to run as root to mount successfully and access the PVC:
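(The spec itself was not shown; a plausible sketch of the workaround, given that 1000 is the uid/gid the container process otherwise uses:)

securityContext:
  runAsUser: 0     # workaround: run the pod as root
  runAsGroup: 0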
Otherwise, I could see the driver creating the dynamic volume on the NAS successfully and mounting it on the node for the pod, but the hub container didn't have enough permissions to read the volume. |
Installing the prometheus operator helm chart with defaults (https://prometheus-community.github.io/helm-charts, kube-prometheus-stack) sets this for the prometheus instance by default:

securityContext:
  runAsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 2000

This makes the "prometheus-kube-prometheus-stack-prometheus-0" pod go into a crash loop with this error in the logs: "unable to create mmap-ed active query log"
Changing the prometheusSpec securityContext like this:

securityContext:
  runAsGroup: 0
  runAsNonRoot: true
  runAsUser: 0
  fsGroup: 2000

makes it all work, but then it is most likely running with root permissions on the file system.
This seems to be an issue with the CSI implementation, where it doesn't support fsGroup handling or similar. For example, Longhorn does this with "fsGroupPolicy: ReadWriteOnceWithFSType", which makes each volume be examined at mount time to determine whether permissions should be recursively applied.
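For comparison, a hedged sketch of how a driver can opt into that behaviour on its CSIDriver object; the Longhorn driver name is used purely because the comment cites it as the example:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: driver.longhorn.io
spec:
  fsGroupPolicy: ReadWriteOnceWithFSType   # examine the volume at mount time and apply fsGroup when appropriate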