Change from kubernetes-alpha provider to kubernetes provider #978
I have also been able to reproduce this error on Azure. BTW it's not Keycloak-related - that's just the last successful 'refresh'. The problem is somewhere in Dask Gateway: if I cut the Dask Gateway pieces out of QHub, it deploys successfully. The "GroupVersionResource" problem seems familiar... I'll see if I can work out where I've seen it before!
@shannon saw it here (private link). It also came up here.
Possibly related to Kubernetes versions (which have recently been updated for Azure). We might need to be clearer about which versions we are targeting.
The original version was 1.22.2 (on which it failed). I'll try on Azure's Central US default version (1.20.9) to see if that works. Plus maybe 1.21.7 (default) on East US.
Azure's Central US default version (1.20.9) gives similar, but not identical, errors:
Kubernetes 1.21.7 (default) on East US works fine...
And the integration test succeeded 12 days ago on: Of course, these version changes have come about now that we are querying Azure for the 'best' Kubernetes version.
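For context, a minimal sketch of how a Terraform deployment can query Azure for the newest supported Kubernetes version. This uses the azurerm provider's `azurerm_kubernetes_service_versions` data source as an assumed stand-in for whatever QHub actually does; the location is illustrative:

```hcl
# Assumed stand-in for QHub's version lookup: ask Azure for the newest
# non-preview Kubernetes version available in a region.
data "azurerm_kubernetes_service_versions" "current" {
  location        = "Central US"
  include_preview = false
}

output "latest_kubernetes_version" {
  # Floating on this value is how cluster versions drift between
  # deployments (e.g. up to 1.22.2), which is why pinning a tested
  # version matters.
  value = data.azurerm_kubernetes_service_versions.current.latest_version
}
```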
For history: the reason we adopted the kubernetes-alpha provider was its support for custom resource definitions (CRDs). As of roughly six months ago, the Terraform kubernetes provider also supports CRDs. We will also need to restrict the kubernetes provider to at least the version that added that support.
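As a concrete sketch of that restriction (the exact floor is an assumption - CRD support via the `kubernetes_manifest` resource stabilized around v2.4.0 of the hashicorp/kubernetes provider; verify against its changelog):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      # Pin to at least the release where kubernetes_manifest (used to
      # apply CRDs) became stable; 2.4.0 is an assumption, not verified.
      version = ">= 2.4.0"
    }
  }
}
```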
@tylerpotts and I were able to get QHub to deploy successfully on minikube by removing most of the fields under
The next step is to test these changes by deploying QHub on Azure. @danlester, @costrouc, do you see any unintended consequences of these changes?
It's helpful to know this worked, but I don't think we should move forward until we understand why it helped. It's possible the CRDs need updating to match the version of Traefik that gets deployed. I'm not sure where they were obtained in the first place, but one way would be to take the Traefik YAML definitions and run them through something like tfk8s to convert them from native Kubernetes YAML to Terraform's HCL (see the sketch below). Or we could borrow from this repo - or perhaps just use that module directly in our code.
Anyway, Tyler showed me quite a serious error, and it would be good to know why it was happening. Traefik itself may have contributed to the problem, but I think the problem is really isolated from Traefik: this is just a question of CRDs in Kubernetes, and there was a problem setting the CRD record. I have tried to produce a basic test case here but I don't get the error yet. You would set up minikube then run
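For reference, a sketch of what a tfk8s conversion of one Traefik CRD might look like (invoked along the lines of `tfk8s -f traefik-crds.yaml -o traefik-crds.tf`; the file names are hypothetical). The schema is abridged to the minimum a v1 CRD accepts - the real Traefik definitions carry full OpenAPI schemas:

```hcl
# Abridged sketch of tfk8s output for Traefik's IngressRoute CRD.
resource "kubernetes_manifest" "crd_ingressroutes" {
  manifest = {
    apiVersion = "apiextensions.k8s.io/v1"
    kind       = "CustomResourceDefinition"
    metadata = {
      name = "ingressroutes.traefik.containo.us"
    }
    spec = {
      group = "traefik.containo.us"
      names = {
        kind     = "IngressRoute"
        listKind = "IngressRouteList"
        plural   = "ingressroutes"
        singular = "ingressroute"
      }
      scope = "Namespaced"
      versions = [{
        name    = "v1alpha1"
        served  = true
        storage = true
        schema = {
          openAPIV3Schema = {
            type                                   = "object"
            "x-kubernetes-preserve-unknown-fields" = true
          }
        }
      }]
    }
  }
}
```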
What was
Resolved with #1003. kubernetes-alpha is no longer being used. The primary complication was the CRDs: they must be applied in a separate step, and using targets to do that was making things complicated.
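To make the "separate step" concrete, a minimal sketch (resource names hypothetical) of why targets were needed: `kubernetes_manifest` requires the CRD to already exist in the cluster at plan time, so a custom resource cannot be planned in the same pass that creates its CRD:

```hcl
# Hypothetical names. The CRD must exist before any IngressRoute can be
# planned, so the workaround was a two-phase apply, e.g.
#   terraform apply -target=kubernetes_manifest.traefik_crd
#   terraform apply
resource "kubernetes_manifest" "traefik_crd" {
  manifest = yamldecode(file("${path.module}/crds/ingressroute.yaml"))
}

resource "kubernetes_manifest" "example_ingressroute" {
  # depends_on orders creation, but does not help at plan time -- the
  # provider still needs the CRD's schema from the live cluster.
  depends_on = [kubernetes_manifest.traefik_crd]

  manifest = {
    apiVersion = "traefik.containo.us/v1alpha1"
    kind       = "IngressRoute"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    spec = {
      entryPoints = ["web"]
      routes = [{
        kind  = "Rule"
        match = "PathPrefix(`/`)"
        services = [{
          name = "example-service"
          port = 80
        }]
      }]
    }
  }
}
```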
GCP: Keycloak has been verified working for both GitHub and Auth0.
Azure: When deploying from the 0c21c8c commit in main, I experienced the following error on deployment: It seems that this may be related to this issue, but I haven't had a chance to investigate further.