Main project:
provider "aws" {
region = "us-east-1"
assume_role_with_web_identity {
role_arn = "arn:aws:iam::${var.account_id}:role/REDACTED"
session_name = "sessionname"
web_identity_token_file = "token.txt"
}
default_tags {
tags = local.default_tags
}
}
This works fine and I can create the EKS cluster.
Module:
data "aws_eks_cluster" "eks" {
name = module.eks.cluster_name
depends_on = [
module.eks.eks_managed_node_groups,
]
}
data "aws_eks_cluster_auth" "eks" {
name = module.eks.cluster_name
depends_on = [
module.eks.eks_managed_node_groups,
]
}
provider "kubernetes" {
host = data.aws_eks_cluster.eks.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.eks.token
}
Question
The Kubernetes provider fails to work: Terraform falls back to the default service account of the separate Jenkins pod this project runs in, which obviously fails. How do I get the Kubernetes provider to assume the same IAM role as the AWS provider?
This is likely not an issue with the Kubernetes provider. I suspect it's either a permissions issue in your IAM configuration or something in the AWS provider's EKS auth datasource.
Does data.aws_eks_cluster_auth.eks.token produce any value at all? You can assign that to an output to check.
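A minimal sketch of that check, using the data source name from your configuration above (the output name here is made up for illustration; marking it sensitive means Terraform only prints it when explicitly asked):

output "eks_auth_token" {
  # Debugging output only: surfaces the token generated by the
  # aws_eks_cluster_auth data source so you can confirm it is non-empty.
  value     = data.aws_eks_cluster_auth.eks.token
  sensitive = true
}

After an apply, terraform output -raw eks_auth_token should show whether a token is being produced at all.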
I’ve ended up having to install the AWS CLI and jq on the container, run aws sts assume-role, and export the resulting credentials as environment variables; now it works.
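For reference, roughly what that workaround looks like; this is only a sketch, assuming the role ARN and session name from the AWS provider block above, that the CLI can already authenticate (e.g. via the pod's web identity token), and using a placeholder ACCOUNT_ID variable:

# Assume the same role the AWS provider uses and export the temporary
# credentials, so anything reading the default AWS credential chain
# (including the EKS auth data source) picks them up.
CREDS=$(aws sts assume-role \
  --role-arn "arn:aws:iam::${ACCOUNT_ID}:role/REDACTED" \
  --role-session-name sessionname \
  --query 'Credentials' --output json)

export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.SessionToken')

With those variables exported, running terraform in the same shell picks up the assumed-role credentials.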