kubernetes client initialization failed: no Auth Provider found for name "oidc" #669
@kcighon I have to admit I have never seen a kubeconfig with this block before
When using oidc to connect previously, I have seen the following:
Are you able to use the kubeconfig snippet you provided and run
Hi @swade1987 - kubectl (and the flux CLI) work as normal with this KUBECONFIG.
@kcighon, I'm interested in whether this kubeconfig works with the upstream Kubernetes Terraform provider. Additionally, can you please provide me with the full configuration block for the flux provider?
Thanks @swade1987. I ran a simple test to create a secret with the upstream Kubernetes provider and it succeeded. The full provider code is as above and relies on the fact that KUBE_CONFIG_PATH (required, as the kubernetes provider does not use KUBECONFIG by default) is set to our KUBECONFIG file. I used the Terraform code below to test just the kubernetes provider:

```hcl
terraform {
  required_version = ">=1.3.4"

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.26.0"
    }
  }
}

provider "kubernetes" {}

resource "kubernetes_secret_v1" "common_secrets" {
  metadata {
    name      = "test"
    namespace = "flux-system"
  }

  data = {
    test = "test"
  }

  type = "Opaque"
}
```

There were no errors and the secret was created as defined. The error reported yesterday appears to be specific to the flux_bootstrap_git resource.
@swade1987 The test creating a secret succeeded, showing that the kubernetes provider works with KUBE_CONFIG_PATH set when the KUBECONFIG file contains an auth-provider named oidc. flux_bootstrap_git with the same setup (i.e. using KUBE_CONFIG_PATH) fails with the error in the OP.
Hi @swade1987 - have you had any further thoughts on this?
+1 for this issue. We are experiencing the same problem with our Terraform when running against clusters deployed with TKGI using OIDC authentication to the Kubernetes API. The KUBECONFIG files provided by TKGI are much the same as the examples provided by @kcighon, and they are used daily for interacting with Kubernetes clusters via kubectl, helm, etc.
@kcighon @Wildeone1 Sorry, I totally dropped the ball on this one. Let me raise it at the weekly flux project meeting this afternoon, as I think it may affect how the flux binary itself interacts with the cluster, and I'll report back.
@darkowlzz researched and found a relevant discussion in this issue: kubernetes/client-go#242. The suggested solution is to import the following package: k8s.io/client-go/plugin/pkg/client/auth/gcp. In the flux CLI, we already import the necessary auth package in main.go, as seen here: flux2/main.go#L33. However, the same import is missing in our Terraform provider. I believe importing k8s.io/client-go/plugin/pkg/client/auth in the Terraform provider should resolve the issue. @kcighon @Wildeone1, could you please test this solution if I create an rc release? This will help us confirm if the issue is fixed. Please note that the Terraform CLI's module installer handles prereleases differently: it won't consider a prerelease as a match for a version constraint unless the version constraint specifically names that prerelease.
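The fix described above is a one-line blank import: client-go's auth provider plugins register themselves via package init side effects. A minimal sketch of what the provider's main package would gain (the surrounding program is illustrative, not the provider's actual main.go):

```go
package main

import (
	"fmt"

	// Blank import for side effects only: registers client-go's auth
	// provider plugins (oidc, gcp, azure, ...), so kubeconfigs using
	// `auth-provider: name: oidc` can be loaded without the
	// `no Auth Provider found for name "oidc"` error.
	_ "k8s.io/client-go/plugin/pkg/client/auth"
)

func main() {
	fmt.Println("client-go auth provider plugins registered")
}
```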
If you kindly create an rc and drop a note here, I'd be happy to validate. Thanks! |
Many thanks @swade1987 - 1.3.1-rc.1 is not currently published on the Terraform Registry: https://registry.terraform.io/providers/fluxcd/flux/latest
@kcighon, sorry for the delayed response; the last few days have been hectic due to some personal issues. Unfortunately, we don’t cut release candidates for other Flux projects. To keep everything aligned, I recommend building the main branch locally using the following commands:
The last command ensures that your Terraform configuration uses the locally built Flux provider binary instead of the upstream version. Make sure you remove this once you've finished testing; otherwise, it will always use the locally built version.
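The exact commands were lost in this copy of the thread. The usual mechanism for pointing Terraform at a locally built provider (after building the binary from the main branch with `go build`) is a `dev_overrides` block in `~/.terraformrc`; the install path below is a placeholder:

```hcl
# ~/.terraformrc -- directs Terraform to the locally built binary
# instead of resolving fluxcd/flux from the registry.
provider_installation {
  dev_overrides {
    "fluxcd/flux" = "/path/to/local/terraform-provider-flux"
  }
  # Fall back to normal installation for all other providers.
  direct {}
}
```

While this file is in place, `terraform plan`/`apply` will use the local build for the flux provider (and print a warning noting the override); deleting the block restores normal registry resolution, which matches the "remove this once you've finished testing" advice above.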
@swade1987 - thank you so much - that works perfectly with our tkgi setup!!! |
Love this, we will be doing a new release soon in line with the flux release schedule. As soon as we have an agreed upon date I'll let you know. Happy for me to mark the issue as closed? |
Definitely - thanks again!! |
@swade1987 thanks again. Happy that this is resolved. |
No problem at all and sorry for the delay. |
Hi @swade1987 - it's been a while and there has been no release to the Terraform Registry. When will this be released? Many thanks
@kcighon this will be released with the next flux release which should be in a few weeks. You can track the release via fluxcd/flux2#4947 |
Describe the bug
We use Kubernetes on TKGI and authenticate against the cluster with SSO to create our KUBECONFIG:

```shell
tkgi login -a <tkgihostfqdn> -k -sso
export KUBECONFIG=~/.kube/<clustername>.yaml
tkgi get-kubeconfig <clustername> -a <tkgihostfqdn> -k -sso
```
The resultant KUBECONFIG looks like:
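The actual file contents were elided in this copy of the issue. For illustration, the relevant part of an OIDC auth-provider kubeconfig generally takes the following shape (all values are placeholders, not the reporter's real config):

```yaml
# users entry using the legacy client-go oidc auth-provider plugin,
# which triggers the "no Auth Provider found" error in the title
users:
- name: <clustername>-user
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://<issuer-url>
        client-id: <client-id>
        id-token: <redacted>
        refresh-token: <redacted>
```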
Running `terraform apply` gives the error in the title: `kubernetes client initialization failed: no Auth Provider found for name "oidc"`
Steps to reproduce
Our terraform code is as follows:
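The reporter's actual Terraform code was elided in this copy. As context for the discussion above (the provider block is empty and cluster access comes from the KUBE_CONFIG_PATH environment variable), a representative setup might look like the following; the repository URL, key path, and cluster path are placeholders and the exact attribute names assume the v1.x provider schema:

```hcl
# Provider left empty: kubeconfig is supplied via the
# KUBE_CONFIG_PATH environment variable, as described above.
provider "flux" {
  git = {
    url = "ssh://git@example.com/org/fleet-infra.git"
    ssh = {
      username    = "git"
      private_key = file("~/.ssh/id_ed25519")
    }
  }
}

resource "flux_bootstrap_git" "this" {
  path = "clusters/<clustername>"
}
```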
Expected behavior
With one of our TKGI clusters this works as expected; however, that KUBECONFIG is token-based only, not OIDC.
Screenshots and recordings
No response
Terraform and provider versions
Terraform v1.7.3
fluxcd/flux Provider v1.2.3
Terraform provider configurations
flux_bootstrap_git resource
Flux version
v2.2.3
Additional context
No response
Would you like to implement a fix?
None