This repository contains opinionated Terraform modules used to deploy and configure an AWS EKS cluster for the StreamNative Platform. It is currently underpinned by the terraform-aws-eks
module.
The working result is a Kubernetes cluster sized to your specifications, bootstrapped with StreamNative's Platform configuration, ready to receive a deployment of Apache Pulsar.
For more information on StreamNative Platform, head on over to our official documentation.
The Terraform command line tool is required and must be installed. It's what we're using to manage the creation of a Kubernetes cluster and its bootstrap configuration, along with the necessary cloud provider infrastructure.
We use Helm for deploying the StreamNative Platform charts on the cluster, and while not necessary, it's recommended to have it installed for debugging purposes.
Your caller identity must also have the necessary AWS IAM permissions to create and work with EC2 (EKS, VPCs, etc.) and Route53.
You will also need the following command-line tools:
- aws
- aws-iam-authenticator
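As a quick orientation, the version constraints from the Requirements table below can be pinned in the configuration that calls this module; a minimal sketch:

```hcl
terraform {
  required_version = ">= 1.1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.61.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.2.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.6.1"
    }
  }
}
```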
EKS has multiple modes of network configuration for how you access the EKS cluster endpoint, as well as how the node groups communicate with the EKS control plane.
This Terraform module supports the following:
- Public (EKS) / Private (Node Groups): The EKS cluster API server is accessible from the internet, and node groups use a private VPC endpoint to communicate with the cluster's control plane (default configuration).
- Public (EKS) / Public (Node Groups): The EKS cluster API server is accessible from the internet, and node groups use a public EKS endpoint to communicate with the cluster's control plane. This mode can be enabled by setting the input
enable_node_group_private_networking = false
in the module.
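For example, a minimal sketch (other required inputs omitted) of enabling the public node-group mode described above:

```hcl
module "sn_cluster" {
  source = "streamnative/cloud/aws"

  # ... cluster_name, subnet IDs, vpc_id, and other required inputs ...

  # Node groups will reach the EKS API server over its public endpoint
  enable_node_group_private_networking = false
}
```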
Note: Currently we do not support fully private EKS clusters with this module (i.e. all network traffic remains internal to the AWS VPC).
For your VPC configuration we require sets of public and private subnets (minimum of two each, one per AWS AZ). Both groups of subnets must have an outbound configuration to the internet. We also recommend using a separate VPC reserved for the EKS cluster, with a minimum CIDR block per subnet of /24.
A Terraform sub-module is available that manages the VPC configuration to our specifications. It can be used in composition with the root module in this repo (see this example).
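If you prefer to manage the VPC yourself, the sketch below shows one way to satisfy the subnet requirements using the community terraform-aws-modules/vpc/aws module. This is illustrative only (the name, AZs, and CIDRs are placeholders) and is not the StreamNative sub-module referenced above:

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = "sn-cluster-vpc" # placeholder
  cidr = "10.80.0.0/16"

  # Two AZs, with one public and one private /24 subnet per AZ
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.80.0.0/24", "10.80.1.0/24"]
  private_subnets = ["10.80.2.0/24", "10.80.3.0/24"]

  # Private subnets route outbound traffic through a NAT gateway
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
}
```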
For more information on how EKS networking can be configured, refer to the following AWS guides:
- Networking in EKS
- Amazon EKS cluster endpoint access control
- De-mystifying cluster networking for Amazon EKS worker nodes
A bare minimum configuration to execute the module:
data "aws_eks_cluster" "cluster" {
name = module.eks_cluster.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks_cluster.eks_cluster_id
}
provider "aws" {
region = var.region
}
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
}
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
insecure = false
}
variable "region" {
default = "us-east-1"
}
module "sn_cluster" {
source = "streamnative/cloud/aws"
cluster_name = "sn-cluster-${var.region}"
cluster_version = "1.21"
hosted_zone_id = "Z04554535IN8Z31SKDVQ2" # Change this to your hosted zone ID
node_pool_max_size = 3
## Note: EKS requires two subnets, each in their own availability zone
public_subnet_ids = ["subnet-abcde012", "subnet-bcde012a"]
private_subnet_ids = ["subnet-vwxyz123", "subnet-efgh242a"]
region = var.region
vpc_id = "vpc-1234556abcdef"
}
In the example main.tf above, a StreamNative Platform EKS cluster is created using Kubernetes version 1.21.

By default, the cluster will come provisioned with 8 node groups (reference node topology[^1]), six of which have a desired capacity set to 0, and only the "xlarge" node group has a default desired capacity of 1. All other node groups remain empty until a corresponding Pulsar or add-on workload is deployed.
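If the default sizing does not fit your workload, the node pool inputs documented in the Inputs table below can be adjusted in your module block; a brief sketch with illustrative values:

```hcl
module "sn_cluster" {
  source = "streamnative/cloud/aws"

  # ... other required inputs ...

  # Narrow the instance classes and pre-provision one node per group
  node_pool_instance_types = ["m6i.large", "m6i.xlarge"]
  node_pool_desired_size   = 1
  node_pool_min_size       = 0
  node_pool_max_size       = 5
}
```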
In addition, the EKS cluster will be configured to support the following add-ons:
- AWS CSI Driver
- AWS Load Balancer Controller
- AWS Node Termination Handler
- cert-manager
- cluster-autoscaler
- external-dns
- Istio
- metrics-server
- Velero (for backup and restore)
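Each of these add-ons can be tuned through the corresponding module inputs (chart name, repository, version, and settings; see the Inputs table below). A brief sketch, with illustrative values:

```hcl
module "sn_cluster" {
  source = "streamnative/cloud/aws"

  # ... other required inputs ...

  # Pin an add-on chart version
  cluster_autoscaler_helm_chart_version = "9.21.0"

  # Pass extra values through to an add-on's Helm chart
  aws_load_balancer_controller_settings = {
    "replicaCount" = "1" # illustrative; consult the chart's values for available options
  }
}
```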
When deploying StreamNative Platform, there are additional resources to be created alongside (and inside!) the EKS cluster:
- StreamNative operators for Pulsar
- Vault Configuration & Resources
We have made this easy by creating additional Terraform modules that can be included alongside your EKS module composition. Consider adding the following to the example main.tf
file above:
```hcl
#######
### This module installs the necessary operators for StreamNative Platform
### See: https://registry.terraform.io/modules/streamnative/charts/helm/latest
#######
module "sn_bootstrap" {
  source = "streamnative/charts/helm"

  enable_function_mesh_operator = true
  enable_vault_operator         = true
  enable_pulsar_operator        = true

  depends_on = [
    module.sn_cluster,
  ]
}
```
To apply the configuration, initialize the Terraform module in the directory containing your own version of the main.tf from the examples above:
```shell
terraform init
```
Validate and apply the configuration:
```shell
terraform apply
```
We use a Helm chart to deploy StreamNative Platform on the receiving Kubernetes cluster. Refer to our official documentation for more info.
Note: Since this module manages all of the Kubernetes add-on dependencies required by StreamNative Platform, it is not necessary to perform all of the steps outlined in the Helm chart's README. Please reach out to your customer representative if you have questions.
Requirements

Name | Version |
---|---|
terraform | >=1.1.0 |
aws | >=3.61.0 |
helm | 2.2.0 |
kubernetes | >=2.6.1 |
Providers

Name | Version |
---|---|
aws | 5.71.0 |
helm | 2.16.0 |
kubernetes | 2.33.0 |
Modules

Name | Source | Version |
---|---|---|
eks | terraform-aws-modules/eks/aws | 18.30.2 |
istio | github.com/streamnative/terraform-helm-charts//modules/istio-operator | v0.8.6 |
vpc_tags | ./modules/eks-vpc-tags | n/a |
Inputs

Name | Description | Type | Default | Required |
---|---|---|---|---|
add_vpc_tags | Adds tags to VPC resources necessary for ingress resources within EKS to perform auto-discovery of subnets. Defaults to "true". Note that this may cause resource cycling (delete and recreate) if you are using Terraform to manage your VPC resources without having a lifecycle { ignore_changes = [ tags ] } block defined within them, since the VPC resources will want to manage the tags themselves and remove the ones added by this module. | bool | true | no |
additional_tags | Additional tags to be added to the resources created by this module. | map(any) | {} | no |
allowed_public_cidrs | List of CIDR blocks that are allowed to access the EKS cluster's public endpoint. Defaults to "0.0.0.0/0" (any). | list(string) | ["0.0.0.0/0"] | no |
asm_secret_arns | A list of ARNs for secrets stored in ASM. This grants the kubernetes-external-secrets controller select access to secrets used by resources within the EKS cluster. If no ARNs are provided via this input, the IAM policy will allow read access to all secrets created in the provided region. | list(string) | [] | no |
aws_load_balancer_controller_helm_chart_name | The name of the Helm chart to use for the AWS Load Balancer Controller. | string | "aws-load-balancer-controller" | no |
aws_load_balancer_controller_helm_chart_repository | The repository containing the Helm chart to use for the AWS Load Balancer Controller. | string | "https://aws.github.io/eks-charts" | no |
aws_load_balancer_controller_helm_chart_version | The version of the Helm chart to use for the AWS Load Balancer Controller. The current version can be found in github: https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/helm/aws-load-balancer-controller/Chart.yaml. | string | "1.4.2" | no |
aws_load_balancer_controller_settings | Additional settings which will be passed to the Helm chart values for the AWS Load Balancer Controller. See https://github.com/kubernetes-sigs/aws-load-balancer-controller/tree/main/helm/aws-load-balancer-controller for available options. | map(string) | {} | no |
bootstrap_self_managed_addons | Indicates whether or not to bootstrap self-managed addons after the cluster has been created. | bool | null | no |
cert_issuer_support_email | The email address to receive notifications from the cert issuer. | string | "[email protected]" | no |
cert_manager_helm_chart_name | The name of the Helm chart in the repository for cert-manager. | string | "cert-manager" | no |
cert_manager_helm_chart_repository | The repository containing the cert-manager helm chart. | string | "https://charts.bitnami.com/bitnami" | no |
cert_manager_helm_chart_version | Helm chart version for the cert-manager. See https://github.com/bitnami/charts/tree/master/bitnami/cert-manager for version releases. | string | "0.6.2" | no |
cert_manager_settings | Additional settings which will be passed to the Helm chart values. See https://github.com/bitnami/charts/tree/master/bitnami/cert-manager for available options. | map(any) | {} | no |
cilium_helm_chart_name | The name of the Helm chart in the repository for Cilium. | string | "cilium" | no |
cilium_helm_chart_repository | The repository containing the Cilium helm chart. | string | "https://helm.cilium.io" | no |
cilium_helm_chart_version | Helm chart version for Cilium. See https://artifacthub.io/packages/helm/cilium/cilium for updates. | string | "1.13.2" | no |
cluster_autoscaler_helm_chart_name | The name of the Helm chart in the repository for cluster-autoscaler. | string | "cluster-autoscaler" | no |
cluster_autoscaler_helm_chart_repository | The repository containing the cluster-autoscaler helm chart. | string | "https://kubernetes.github.io/autoscaler" | no |
cluster_autoscaler_helm_chart_version | Helm chart version for the cluster-autoscaler. Defaults to "9.10.4". See https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler for more details. | string | "9.21.0" | no |
cluster_autoscaler_settings | Additional settings which will be passed to the Helm chart values for cluster-autoscaler, see https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler for options. | map(any) | {} | no |
cluster_enabled_log_types | A list of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html). | list(string) | [...] | no |
cluster_encryption_config | Configuration block with encryption configuration for the cluster. To disable secret encryption, set this value to {}. | any | {} | no |
cluster_iam | Cluster IAM settings. | any | null | no |
cluster_name | The name of your EKS cluster and associated resources. Must be 16 characters or less. | string | "" | no |
cluster_networking | Cluster Networking settings. | any | null | no |
cluster_security_group_additional_rules | Additional rules to add to the cluster security group. Set source_node_security_group = true inside rules to set the node_security_group as source. | any | {} | no |
cluster_security_group_id | The ID of an existing security group to use for the EKS cluster. If not provided, a new security group will be created. | string | "" | no |
cluster_service_ipv4_cidr | The CIDR block to assign Kubernetes service IP addresses from. If you don't specify a block, Kubernetes assigns addresses from either the 10.100.0.0/16 or 172.20.0.0/16 CIDR blocks. | string | null | no |
cluster_version | The version of Kubernetes to be installed. | string | "1.20" | no |
create_cluster_security_group | Whether to create a new security group for the EKS cluster. If set to false, you must provide an existing security group via the cluster_security_group_id variable. | bool | true | no |
create_iam_policies | Whether to create IAM policies for the IAM roles. If set to false, the module will default to using existing policy ARNs that must be present in the AWS account. | bool | false | no |
create_node_security_group | Whether to create a new security group for the EKS nodes. If set to false, you must provide an existing security group via the node_security_group_id variable. | bool | true | no |
csi_helm_chart_name | The name of the Helm chart in the repository for CSI. | string | "aws-ebs-csi-driver" | no |
csi_helm_chart_repository | The repository containing the CSI helm chart. | string | "https://kubernetes-sigs.github.io/aws-ebs-csi-driver/" | no |
csi_helm_chart_version | Helm chart version for CSI. | string | "2.8.0" | no |
csi_settings | Additional settings which will be passed to the Helm chart values, see https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/charts/aws-ebs-csi-driver/values.yaml for available options. | map(any) | {} | no |
disable_public_eks_endpoint | Whether to disable public access to the EKS control plane endpoint. If set to "true", additional configuration is required in order for the cluster to function properly, such as AWS PrivateLink for EC2, ECR, and S3, along with a VPN to access the EKS control plane. It is recommended to keep this setting to "false" unless you are familiar with this type of configuration. | bool | false | no |
disable_public_pulsar_endpoint | Whether or not to make the Istio Gateway use a public facing or internal network load balancer. If set to "true", additional configuration is required in order to manage the cluster from the StreamNative console. | bool | false | no |
disk_encryption_kms_key_arn | The KMS Key ARN to use for EBS disk encryption. If not set, the default EBS encryption key will be used. | string | "" | no |
enable_bootstrap | Enables bootstrapping of add-ons within the cluster. | bool | true | no |
enable_cilium | Enables Cilium on the cluster. Set to "false" by default. | bool | false | no |
enable_cilium_taint | Adds the Cilium taint to nodes. Is "true" by default. Should be set to "false" if adding Cilium to an existing pool. | bool | true | no |
enable_istio | Allows for enabling the bootstrap of Istio explicitly in scenarios where the input "var.enable_bootstrap" is set to "false". | bool | true | no |
enable_node_group_private_networking | Enables private networking for the EKS node groups (not the EKS cluster endpoint, which remains public), meaning Kubernetes API requests that originate within the cluster's VPC use a private VPC endpoint for EKS. Defaults to "true". | bool | true | no |
enable_node_pool_monitoring | Enable CloudWatch monitoring for the default pool(s). | bool | false | no |
enable_nodes_use_public_subnet | When set to true, the node groups will use the public subnets rather than the private subnets, and the public subnets must have auto-assign public IP enabled so that the nodes can access the internet. Default is false. | bool | false | no |
enable_resource_creation | When enabled, all dependencies, like roles, buckets, etc. will be created. When disabled, they will not. Use in combination with enable_bootstrap to manage these outside this module. | bool | true | no |
enable_sncloud_control_plane_access | Whether to enable access to the EKS control plane endpoint. If set to "false", additional configuration is required in order for the cluster to function properly, such as AWS PrivateLink for EC2, ECR, and S3, along with a VPN to access the EKS control plane. It is recommended to keep this setting to "true" unless you are familiar with this type of configuration. | bool | true | no |
enable_v3_node_groups | Enable v3 node groups, which uses a single ASG and all other node groups enabled elsewhere. | bool | false | no |
enable_v3_node_migration | Enable v3 node and v2 node groups at the same time. Intended for use with migration to v3 nodes. | bool | false | no |
enable_v3_node_taints | When v3 node groups are enabled, use the node taints. Defaults to true. | bool | true | no |
external_dns_helm_chart_name | The name of the Helm chart in the repository for ExternalDNS. | string | "external-dns" | no |
external_dns_helm_chart_repository | The repository containing the ExternalDNS helm chart. | string | "https://charts.bitnami.com/bitnami" | no |
external_dns_helm_chart_version | Helm chart version for ExternalDNS. See https://hub.helm.sh/charts/bitnami/external-dns for updates. | string | "6.10.2" | no |
external_dns_settings | Additional settings which will be passed to the Helm chart values, see https://hub.helm.sh/charts/bitnami/external-dns. | map(any) | {} | no |
hosted_zone_domain_name_filters | A list of domain names of the Route53 hosted zones, used by the cluster's External DNS configuration for domain filtering. | list(string) | [] | no |
hosted_zone_id | The ID of the Route53 hosted zone used by the cluster's External DNS configuration. | string | "*" | no |
iam_path | An IAM Path to be used for all IAM resources created by this module. Changing this from the default will cause issues with StreamNative's Vendor access, if applicable. | string | "/StreamNative/" | no |
istio_mesh_id | The ID used by the Istio mesh. This is also the ID of the StreamNative Cloud Pool used for the workload environments. This is required when "enable_istio_operator" is set to "true". | string | null | no |
istio_network | The name of the network used for the Istio deployment. This is required when "enable_istio_operator" is set to "true". | string | "default" | no |
istio_profile | The path or name for an Istio profile to load. Set to the profile "default" if not specified. | string | "default" | no |
istio_revision_tag | The revision tag value to use for the Istio label "istio.io/rev". | string | "sn-stable" | no |
istio_settings | Additional settings which will be passed to the Helm chart values. | map(any) | {} | no |
istio_trust_domain | The trust domain used for the Istio deployment, which corresponds to the root of a system. This is required when "enable_istio_operator" is set to "true". | string | "cluster.local" | no |
kiali_operator_settings | Additional settings which will be passed to the Helm chart values. | map(any) | {} | no |
manage_aws_auth_configmap | Whether to manage the aws_auth configmap. | bool | true | no |
map_additional_iam_roles | A list of IAM role bindings to add to the aws-auth ConfigMap. | list(object({...})) | [] | no |
metrics_server_helm_chart_name | The name of the helm release to install. | string | "metrics-server" | no |
metrics_server_helm_chart_repository | The repository containing the external-metrics helm chart. | string | "https://kubernetes-sigs.github.io/metrics-server" | no |
metrics_server_helm_chart_version | Helm chart version for Metrics server. | string | "3.8.2" | no |
metrics_server_settings | Additional settings which will be passed to the Helm chart values, see https://github.com/external-secrets/kubernetes-external-secrets/tree/master/charts/kubernetes-external-secrets for available options. | map(any) | {} | no |
migration_mode | Whether to enable migration mode for the cluster. This is used to migrate details from existing security groups, which have had their names and description changed in versions v18.X of the community EKS module. | bool | false | no |
migration_mode_node_sg_name | The name (not ID!) of the existing security group used by worker nodes. This is required when "migration_mode" is set to "true", otherwise the parent module will attempt to set a new security group name and destroy the existing one. | string | null | no |
node_groups | Map of EKS managed node group definitions to create. | any | null | no |
node_pool_ami_id | The AMI ID to use for the EKS cluster nodes. Defaults to the latest EKS Optimized AMI provided by AWS. | string | "" | no |
node_pool_azs | A list of availability zones to use for the EKS node group. If not set, the module will use the same availability zones as the cluster. | list(string) | [] | no |
node_pool_block_device_name | The name of the block device to use for the EKS cluster nodes. | string | "/dev/nvme0n1" | no |
node_pool_desired_size | Desired number of worker nodes in the node pool. | number | 0 | no |
node_pool_disk_iops | The amount of provisioned IOPS for the worker node root EBS volume. | number | 3000 | no |
node_pool_disk_size | Disk size in GiB for worker nodes in the node pool. Defaults to 50. | number | 100 | no |
node_pool_disk_type | Disk type for worker nodes in the node pool. Defaults to gp3. | string | "gp3" | no |
node_pool_ebs_optimized | If true, the launched EC2 instance(s) will be EBS-optimized. Specify this if using a custom AMI with pre-user data. | bool | true | no |
node_pool_instance_types | Set of instance types associated with the EKS Node Groups. Defaults to ["m6i.large", "m6i.xlarge", "m6i.2xlarge", "m6i.4xlarge", "m6i.8xlarge"], which will create empty node groups of each instance type to account for any workload configurable from StreamNative Cloud. | list(string) | ["m6i.large", "m6i.xlarge", "m6i.2xlarge", "m6i.4xlarge", "m6i.8xlarge"] | no |
node_pool_labels | A map of kubernetes labels to add to the node pool. | map(string) | {} | no |
node_pool_max_size | The maximum size of the node pool Autoscaling group. | number | n/a | yes |
node_pool_min_size | The minimum size of the node pool AutoScaling group. | number | 0 | no |
node_pool_pre_userdata | The user data to apply to the worker nodes in the node pool. This is applied before the bootstrap.sh script. | string | "" | no |
node_pool_tags | A map of tags to add to the node groups and supporting resources. | map(string) | {} | no |
node_pool_taints | A list of taints in map format to apply to the node pool. | any | {} | no |
node_security_group_additional_rules | Additional ingress rules to add to the node security group. Set source_cluster_security_group = true inside rules to set the cluster_security_group as source. | any | {} | no |
node_security_group_id | An ID of an existing security group to use for the EKS node groups. If not specified, a new security group will be created. | string | "" | no |
node_termination_handler_chart_version | The version of the Helm chart to use for the AWS Node Termination Handler. | string | "0.18.5" | no |
node_termination_handler_helm_chart_name | The name of the Helm chart to use for the AWS Node Termination Handler. | string | "aws-node-termination-handler" | no |
node_termination_handler_helm_chart_repository | The repository containing the Helm chart to use for the AWS Node Termination Handler. | string | "https://aws.github.io/eks-charts" | no |
node_termination_handler_settings | Additional settings which will be passed to the Helm chart values for the AWS Node Termination Handler. See https://github.com/kubernetes-sigs/aws-load-balancer-controller/tree/main/helm/aws-load-balancer-controller for available options. | map(string) | {} | no |
permissions_boundary_arn | If required, provide the ARN of the IAM permissions boundary to use for restricting StreamNative's vendor access. | string | null | no |
private_subnet_ids | The IDs of existing private subnets. | list(string) | [] | no |
public_subnet_ids | The IDs of existing public subnets. | list(string) | [] | no |
region | The AWS region. | string | null | no |
s3_encryption_kms_key_arn | KMS key ARN to use for S3 encryption. If not set, the default AWS S3 key will be used. | string | "" | no |
service_domain | When Istio is enabled, the FQDN needed specifically for Istio's authorization policies. | string | "" | no |
sncloud_services_iam_policy_arn | The IAM policy ARN to be used for all StreamNative Cloud Services that need to interact with AWS services external to EKS. This policy is typically created by StreamNative's "terraform-managed-cloud" module, as a separate customer-driven process for managing StreamNative's Vendor Access into AWS. If no policy ARN is provided, the module will default to the expected named policy of "StreamNativeCloudRuntimePolicy". This variable allows for flexibility in the event that the policy name changes, or if a custom policy provided by the customer is preferred. | string | "" | no |
sncloud_services_lb_policy_arn | A custom IAM policy ARN for the LB load balancer controller. This policy is typically created by StreamNative's "terraform-managed-cloud" module, as a separate customer-driven process for managing StreamNative's Vendor Access into AWS. If no policy ARN is provided, the module will default to the expected named policy of "StreamNativeCloudLBPolicy". This variable allows for flexibility in the event that the policy name changes, or if a custom policy provided by the customer is preferred. | string | "" | no |
use_runtime_policy | Legacy variable, will be deprecated in future versions. The preference of this module is to have the parent EKS module create and manage the IAM role. However, some older configurations may have had the cluster IAM role managed separately, and this variable allows for backwards compatibility. | bool | false | no |
v3_node_group_core_instance_type | The instance type to use for the core node group. | string | "m6i.large" | no |
velero_backup_schedule | The scheduled time for Velero to perform backups. Written as a cron expression, defaults to "0 5 * * *" or "at 5:00am every day". | string | "0 5 * * *" | no |
velero_excluded_namespaces | A comma-separated list of namespaces to exclude from Velero backups. Defaults are set to ["default", "kube-system", "operators", "olm"]. | list(string) | ["default", "kube-system", "operators", "olm"] | no |
velero_helm_chart_name | The name of the Helm chart to use for Velero. | string | "velero" | no |
velero_helm_chart_repository | The repository containing the Helm chart to use for Velero. | string | "https://vmware-tanzu.github.io/helm-charts" | no |
velero_helm_chart_version | The version of the Helm chart to use for Velero. The current version can be found in github: https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero. | string | "2.31.8" | no |
velero_namespace | The kubernetes namespace where Velero should be deployed. | string | "velero" | no |
velero_plugin_version | Which version of the velero-plugin-for-aws to use. | string | "v1.5.1" | no |
velero_policy_arn | The ARN for the IAM policy used by the Velero backup addon service. For enhanced security, we allow for IAM policies used by cluster addon services to be created separately from this module. This is only required if the input "create_iam_policy_for_velero" is set to "false". If created elsewhere, the expected name of the policy is "StreamNativeCloudVeleroBackupPolicy". | string | null | no |
velero_settings | Additional settings which will be passed to the Helm chart values for Velero. See https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero for available options. | map(string) | {} | no |
vpc_id | The ID of the AWS VPC to use. | string | "" | no |
Outputs

Name | Description |
---|---|
aws_loadbalancer_arn | ARN for loadbalancer |
cert_manager_arn | The ARN for Cert Manager |
cluster_autoscaler_arn | ARN for Cluster Autoscaler |
csi_arn | ARN for csi |
eks | All outputs of module.eks, provided as a convenient way to access the child module's outputs. |
eks_cluster_arn | The ARN for the EKS cluster created by this module |
eks_cluster_certificate_authority_data | Base64 encoded certificate data required to communicate with the cluster |
eks_cluster_endpoint | The endpoint for the EKS cluster created by this module |
eks_cluster_id | The id/name of the EKS cluster created by this module |
eks_cluster_identity_oidc_issuer_arn | The ARN for the OIDC issuer created by this module |
eks_cluster_identity_oidc_issuer_string | A formatted string containing the prefix for the OIDC issuer created by this module. Same as "cluster_oidc_issuer_url", but with "https://" stripped from the name. This output is typically used in other StreamNative modules that request the "oidc_issuer" input. |
eks_cluster_identity_oidc_issuer_url | The URL for the OIDC issuer created by this module |
eks_cluster_platform_version | The platform version for the EKS cluster created by this module |
eks_cluster_primary_security_group_id | The id of the primary security group created by the EKS service itself, not by this module. This is labeled "Cluster Security Group" in the EKS console. |
eks_cluster_secondary_security_group_id | The id of the secondary security group created by this module. This is labeled "Additional Security Groups" in the EKS console. |
eks_node_group_iam_role_arn | The IAM Role ARN used by the Worker configuration |
eks_node_group_security_group_id | Security group ID attached to the EKS node groups |
eks_node_groups | Map of all attributes of the EKS node groups created by this module |
external_dns_arn | The ARN for External DNS |
inuse_azs | The availability zones in which the EKS nodes are deployed |
tiered_storage_s3_bucket_arn | The ARN for the tiered storage S3 bucket created by this module |
velero_arn | ARN for Velero |
velero_s3_bucket_arn | The ARN for the Velero S3 bucket created by this module |
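As a usage sketch (the output block names here are illustrative), module outputs can be surfaced from your root configuration or wired into other StreamNative modules:

```hcl
output "eks_cluster_endpoint" {
  value = module.sn_cluster.eks_cluster_endpoint
}

output "oidc_issuer" {
  # Commonly passed to other StreamNative modules that request an "oidc_issuer" input
  value = module.sn_cluster.eks_cluster_identity_oidc_issuer_string
}
```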
Footnotes

[^1]: When running Apache Pulsar in Kubernetes, we make use of EBS-backed Kubernetes Persistent Volume Claims (PVCs). EBS volumes themselves are zonal, which means an EC2 instance can only mount a volume that exists in its same AWS Availability Zone. For this reason we have added node group "zone affinity" functionality into our module, where an EKS node group is created per AWS Availability Zone. This is controlled by the number of subnets you pass to the EKS module, creating one node group per subnet. In addition, we also create node groups based on instance classes, which allows us to perform more fine-tuned control around scheduling and resource utilization. To illustrate, if a cluster is being created across 3 availability zones and the default 4 instance classes are being used, then 12 total node groups will be created. All of these, except the nodes belonging to the xlarge group (which has a default capacity of 1 for initial scheduling of workloads), will remain empty until a corresponding Pulsar or addon workload is deployed.