Commit
Merge pull request #163 from sassoftware/staging
5.2.0 - October 3, 2022
dhoucgitter authored Oct 3, 2022
2 parents d88cc6b + ecc2672 commit a6f0c35
Showing 18 changed files with 288 additions and 67 deletions.
4 changes: 2 additions & 2 deletions Dockerfile
@@ -3,7 +3,7 @@ ARG AWS_CLI_VERSION=2.1.29
FROM hashicorp/terraform:$TERRAFORM_VERSION as terraform

FROM amazon/aws-cli:$AWS_CLI_VERSION
ARG KUBECTL_VERSION=1.21.7
ARG KUBECTL_VERSION=1.22.10

WORKDIR /viya4-iac-aws

@@ -17,7 +17,7 @@ RUN yum -y install git openssh jq which \
&& chmod g=u -R /etc/passwd /etc/group /viya4-iac-aws \
&& git config --system --add safe.directory /viya4-iac-aws \
&& terraform init

ENV TF_VAR_iac_tooling=docker
ENTRYPOINT ["/viya4-iac-aws/docker-entrypoint.sh"]
VOLUME ["/workspace"]
20 changes: 10 additions & 10 deletions README.md
@@ -17,7 +17,7 @@ This project contains Terraform scripts to provision the AWS cloud infrastructur
This project helps you to automate the cluster-provisioning phase of SAS Viya deployment. To learn about all phases and options of the
SAS Viya deployment process, see [Getting Started with SAS Viya and Azure Kubernetes Service](https://go.documentation.sas.com/doc/en/itopscdc/default/itopscon/n1d7qc4nfr3s5zn103a1qy0kj4l1.htm) in _SAS® Viya® Operations_.

Once the cloud resources are provisioned, use the [viya4-deployment](https://github.com/sassoftware/viya4-deployment) project to deploy
Once the cloud resources are provisioned, use the [viya4-deployment](https://github.com/sassoftware/viya4-deployment) project to deploy
SAS Viya 4 in your cloud environment. For more information about SAS Viya 4 requirements and documentation for the deployment
process, refer to the [SAS Viya 4 Operations Guide](https://go.documentation.sas.com/doc/en/itopscdc/default/itopswlcm/home.htm).

@@ -35,22 +35,22 @@ Use of these tools requires operational knowledge of the following technologies:
This project supports two options for running Terraform scripts:
- Terraform installed on your local machine
- Terraform run from a Docker container (Docker must be installed)

For more information, see [Docker Usage](./docs/user/DockerUsage.md). Using Docker to run the Terraform scripts is recommended.

The following are also required:
- Access to an **AWS account** with a user that is associated with the applied [IAM Policy](./files/policies/devops-iac-eks-policy.json)
- Subscription to [Ubuntu 20.04 LTS - Focal](https://aws.amazon.com/marketplace/pp/prodview-iftkyuwv2sjxi)

#### Terraform Requirements:

- [Terraform](https://www.terraform.io/downloads.html) v1.0.0
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) - v1.21.7
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) - v1.22.10
- [jq](https://stedolan.github.io/jq/) v1.6
- [AWS CLI](https://aws.amazon.com/cli) (optional; useful as an alternative to the AWS Web Console) v2.1.29

#### Docker Requirements:

- [Docker](https://docs.docker.com/get-docker/)

## Getting Started
@@ -75,20 +75,20 @@ In order to create and destroy AWS resources on your behalf, Terraform needs an
### Customize Input Values

Terraform scripts require variable definitions as input. Review and modify default values to meet your requirements. Create a file named
`terraform.tfvars` to customize any input variable value documented in the [CONFIG-VARS.md](docs/CONFIG-VARS.md) file.
`terraform.tfvars` to customize any input variable value documented in the [CONFIG-VARS.md](docs/CONFIG-VARS.md) file.

To get started, you can copy one of the example variable definition files provided in the [examples](./examples) folder. For more information about the
variables that are declared in each file, refer to the [CONFIG-VARS.md](docs/CONFIG-VARS.md) file.
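As a minimal sketch, a `terraform.tfvars` assembled from the values shown in the sample files might look like the following. The `prefix` and `location` entries are illustrative placeholders; confirm variable names and which variables are required against [CONFIG-VARS.md](docs/CONFIG-VARS.md).

```hcl
# terraform.tfvars - starting point based on the examples/ sample files
prefix   = "viya4-demo" # illustrative resource-name prefix
location = "us-east-1"  # illustrative AWS region

## Cluster config
kubernetes_version          = "1.22"
default_nodepool_node_count = 2
default_nodepool_vm_type    = "m5.2xlarge"

## General
efs_performance_mode = "maxIO"
storage_type         = "standard"
```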

**NOTE:** You will need to update the `cidr_blocks` in the [variables.tf](variables.tf) file to allow traffic from your current network. Without these rules,
**NOTE:** You will need to update the `cidr_blocks` in the [variables.tf](variables.tf) file to allow traffic from your current network. Without these rules,
access to the cluster will only be allowed via the AWS Console.
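As a sketch of what that change might involve, a CIDR variable in `variables.tf` generally has the shape below. The variable body and the CIDR range shown here are assumptions for illustration, not the repository's exact definition.

```hcl
# variables.tf (hypothetical sketch) - restrict inbound access to your network.
variable "cidr_blocks" {
  description = "CIDR ranges allowed to reach the cluster"
  type        = list(string)
  default     = ["203.0.113.0/24"] # replace with your network's public CIDR
}
```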

You have the option to specify variable definitions that are not included in `terraform.tfvars` or to use a variable definition file other than
`terraform.tfvars`. See [Advanced Terraform Usage](docs/user/AdvancedTerraformUsage.md) for more information.

## Create and Manage Cloud Resources

Create and manage the required cloud resources. Perform one of the following steps, based on whether you are using Docker:
Create and manage the required cloud resources. Perform one of the following steps, based on whether you are using Docker:

- run [Terraform](docs/user/TerraformUsage.md) directly on your workstation
- run the [Docker container](docs/user/DockerUsage.md) (recommended)
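If you run Terraform directly, the standard workflow is roughly the following. This is a sketch of the usual Terraform CLI sequence; see [TerraformUsage.md](docs/user/TerraformUsage.md) for the authoritative commands and any project-specific flags.

```shell
# From the repository root, after creating terraform.tfvars
terraform init                                           # download providers and modules
terraform plan -var-file=terraform.tfvars -out=plan.out  # preview the changes
terraform apply plan.out                                 # create the AWS resources
terraform destroy -var-file=terraform.tfvars             # tear everything down when finished
```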
6 changes: 3 additions & 3 deletions docs/CONFIG-VARS.md
@@ -197,7 +197,7 @@ Custom policy:
| <div style="width:50px">Name</div> | <div style="width:150px">Description</div> | <div style="width:50px">Type</div> | <div style="width:75px">Default</div> | <div style="width:150px">Notes</div> |
| :--- | :--- | :--- | :--- | :--- |
| create_static_kubeconfig | Allows the user to create a provider- or service account-based kubeconfig file | bool | false | A value of `false` defaults to using the cloud provider's mechanism for generating the kubeconfig file. A value of `true` creates a static kubeconfig that uses a service account and cluster role binding to provide credentials. |
| kubernetes_version | The EKS cluster Kubernetes version | string | "1.21" | |
| kubernetes_version | The EKS cluster Kubernetes version | string | "1.22" | |
| create_jump_vm | Create bastion host (jump VM) | bool | true| |
| create_jump_public_ip | Add public IP address to jump VM | bool | true | |
| jump_vm_admin | OS admin user for the jump VM | string | "jumpuser" | |
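Tying the table above together, a `terraform.tfvars` fragment that opts into a static kubeconfig and keeps the default jump VM settings might look like this. The values mirror the defaults documented above, except `create_static_kubeconfig`, which is flipped to `true`.

```hcl
create_static_kubeconfig = true       # static kubeconfig via service account + cluster role binding
kubernetes_version       = "1.22"     # EKS cluster version
create_jump_vm           = true       # bastion host (default)
create_jump_public_ip    = true       # public IP on the jump VM (default)
jump_vm_admin            = "jumpuser" # OS admin user (default)
```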
@@ -298,7 +298,7 @@ Each server element, like `foo = {}`, can contain none, some, or all of the para
<!--| Name | Description | Type | Default | Notes | -->
| <div style="width:50px">Name</div> | <div style="width:150px">Description</div> | <div style="width:50px">Type</div> | <div style="width:75px">Default</div> | <div style="width:150px">Notes</div> |
| :--- | :--- | :--- | :--- | :--- |
| server_version | The version of the PostgreSQL server | string | "11" | Changing this value triggers resource recreation |
| server_version | The version of the PostgreSQL server | string | "13" | Refer to the [Viya 4 Administration Guide](https://go.documentation.sas.com/doc/en/sasadmincdc/default/itopssr/p05lfgkwib3zxbn1t6nyihexp12n.htm?fromDefault=#p1wq8ouke3c6ixn1la636df9oa1u) for the supported versions of PostgreSQL for SAS Viya. |
| instance_type | The VM type for the PostgreSQL Server | string | "db.m5.xlarge" | |
| storage_size | Max storage allowed for the PostgreSQL server in MB | number | 50 | |
| backup_retention_days | Backup retention days for the PostgreSQL server | number | 7 | Supported values are between 7 and 35 days. |
@@ -328,7 +328,7 @@ database_servers = {
deletion_protection = false
administrator_login = "cpsadmin"
administrator_password = "1tsAB3aut1fulDay"
server_version = "12"
server_version = "13"
server_port = "5432"
ssl_enforcement_enabled = true
parameters = [{ "apply_method": "immediate", "name": "foo", "value": "true" }, { "apply_method": "immediate", "name": "bar", "value": "false" }]
8 changes: 4 additions & 4 deletions examples/sample-input-byo.tfvars
@@ -1,5 +1,5 @@
# !NOTE! - These are only a subset of the variables in CONFIG-VARS.md provided
# as examples. Customize this file to add any variables from CONFIG-VARS.md whose
# as examples. Customize this file to add any variables from CONFIG-VARS.md whose
# default values you want to change.

# **************** REQUIRED VARIABLES ****************
@@ -13,7 +13,7 @@ vpc_id = "<existing-vpc-id>" # only needed if using pre-existing VPC
subnet_ids = { # only needed if using pre-existing subnets
"public" : ["existing-public-subnet-id1", "existing-public-subnet-id2"],
"private" : ["existing-private-subnet-id1", "existing-private-subnet-id2"],
"database" : ["existing-database-subnet-id1", "existing-database-subnet-id2"] # only when 'create_postgres=true'
"database" : ["existing-database-subnet-id1", "existing-database-subnet-id2"] # only when 'create_postgres=true'
}
nat_id = "<existing-NAT-gateway-id>"
security_group_id = "<existing-security-group-id>" # only needed if using pre-existing Security Group
@@ -37,12 +37,12 @@ postgres_servers = {
}

## Cluster config
kubernetes_version = "1.21"
kubernetes_version = "1.22"
default_nodepool_node_count = 2
default_nodepool_vm_type = "m5.2xlarge"
default_nodepool_custom_data = ""

## General
## General
efs_performance_mode = "maxIO"
storage_type = "standard"

4 changes: 2 additions & 2 deletions examples/sample-input-connect.tfvars
@@ -27,12 +27,12 @@ postgres_servers = {
}

## Cluster config
kubernetes_version = "1.21"
kubernetes_version = "1.22"
default_nodepool_node_count = 2
default_nodepool_vm_type = "m5.2xlarge"
default_nodepool_custom_data = ""

## General
## General
efs_performance_mode = "maxIO"
storage_type = "standard"

26 changes: 13 additions & 13 deletions examples/sample-input-custom-data.tfvars
@@ -27,18 +27,18 @@ postgres_servers = {
}

## Cluster config
kubernetes_version = "1.21"
kubernetes_version = "1.22"
default_nodepool_node_count = 2
default_nodepool_vm_type = "m5.2xlarge"
default_nodepool_custom_data = ""

## General
## General
efs_performance_mode = "maxIO"
storage_type = "standard"

## Cluster Node Pools config
node_pools = {
cas = {
cas = {
"vm_type" = "i3.8xlarge"
"cpu_type" = "AL2_x86_64"
"os_disk_type" = "gp2"
@@ -47,15 +47,15 @@ node_pools = {
"min_nodes" = 1
"max_nodes" = 5
"node_taints" = ["workload.sas.com/class=cas:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "cas"
"node_labels" = {
"workload.sas.com/class" = "cas"
}
"custom_data" = "./files/custom-data/additional_userdata.sh"
"metadata_http_endpoint" = "enabled"
"metadata_http_tokens" = "required"
"metadata_http_put_response_hop_limit" = 1
},
compute = {
compute = {
"vm_type" = "m5.8xlarge"
"cpu_type" = "AL2_x86_64"
"os_disk_type" = "gp2"
@@ -73,7 +73,7 @@ node_pools = {
"metadata_http_tokens" = "required"
"metadata_http_put_response_hop_limit" = 1
},
stateless = {
stateless = {
"vm_type" = "m5.4xlarge"
"cpu_type" = "AL2_x86_64"
"os_disk_type" = "gp2"
@@ -82,15 +82,15 @@ node_pools = {
"min_nodes" = 1
"max_nodes" = 5
"node_taints" = ["workload.sas.com/class=stateless:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "stateless"
"node_labels" = {
"workload.sas.com/class" = "stateless"
}
"custom_data" = ""
"metadata_http_endpoint" = "enabled"
"metadata_http_tokens" = "required"
"metadata_http_put_response_hop_limit" = 1
},
stateful = {
},
stateful = {
"vm_type" = "m5.4xlarge"
"cpu_type" = "AL2_x86_64"
"os_disk_type" = "gp2"
@@ -99,8 +99,8 @@ node_pools = {
"min_nodes" = 1
"max_nodes" = 3
"node_taints" = ["workload.sas.com/class=stateful:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "stateful"
"node_labels" = {
"workload.sas.com/class" = "stateful"
}
"custom_data" = ""
"metadata_http_endpoint" = "enabled"
4 changes: 2 additions & 2 deletions examples/sample-input-gpu.tfvars
@@ -27,12 +27,12 @@ postgres_servers = {
}

## Cluster config
kubernetes_version = "1.21"
kubernetes_version = "1.22"
default_nodepool_node_count = 2
default_nodepool_vm_type = "m5.2xlarge"
default_nodepool_custom_data = ""

## General
## General
efs_performance_mode = "maxIO"
storage_type = "standard"

26 changes: 13 additions & 13 deletions examples/sample-input-ha.tfvars
@@ -30,18 +30,18 @@ postgres_servers = {
ssh_public_key = "~/.ssh/id_rsa.pub"

## Cluster config
kubernetes_version = "1.21"
kubernetes_version = "1.22"
default_nodepool_node_count = 2
default_nodepool_vm_type = "m5.2xlarge"
default_nodepool_custom_data = ""

## General
## General
efs_performance_mode = "maxIO"
storage_type = "ha"

## Cluster Node Pools config
node_pools = {
cas = {
cas = {
"vm_type" = "i3.8xlarge"
"cpu_type" = "AL2_x86_64"
"os_disk_type" = "gp2"
@@ -50,15 +50,15 @@ node_pools = {
"min_nodes" = 1
"max_nodes" = 5
"node_taints" = ["workload.sas.com/class=cas:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "cas"
"node_labels" = {
"workload.sas.com/class" = "cas"
}
"custom_data" = "./files/custom-data/additional_userdata.sh"
"metadata_http_endpoint" = "enabled"
"metadata_http_tokens" = "required"
"metadata_http_put_response_hop_limit" = 1
},
compute = {
compute = {
"vm_type" = "m5.8xlarge"
"cpu_type" = "AL2_x86_64"
"os_disk_type" = "gp2"
@@ -76,7 +76,7 @@ node_pools = {
"metadata_http_tokens" = "required"
"metadata_http_put_response_hop_limit" = 1
},
stateless = {
stateless = {
"vm_type" = "m5.4xlarge"
"cpu_type" = "AL2_x86_64"
"os_disk_type" = "gp2"
@@ -85,15 +85,15 @@ node_pools = {
"min_nodes" = 1
"max_nodes" = 5
"node_taints" = ["workload.sas.com/class=stateless:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "stateless"
"node_labels" = {
"workload.sas.com/class" = "stateless"
}
"custom_data" = ""
"metadata_http_endpoint" = "enabled"
"metadata_http_tokens" = "required"
"metadata_http_put_response_hop_limit" = 1
},
stateful = {
},
stateful = {
"vm_type" = "m5.4xlarge"
"cpu_type" = "AL2_x86_64"
"os_disk_type" = "gp2"
@@ -102,8 +102,8 @@ node_pools = {
"min_nodes" = 1
"max_nodes" = 3
"node_taints" = ["workload.sas.com/class=stateful:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "stateful"
"node_labels" = {
"workload.sas.com/class" = "stateful"
}
"custom_data" = ""
"metadata_http_endpoint" = "enabled"
8 changes: 4 additions & 4 deletions examples/sample-input-minimal.tfvars
@@ -27,12 +27,12 @@ tags = { } # e.g., { "key1" = "value1", "key2
# }

## Cluster config
kubernetes_version = "1.21"
kubernetes_version = "1.22"
default_nodepool_node_count = 1
default_nodepool_vm_type = "m5.large"
default_nodepool_custom_data = ""

## General
## General
efs_performance_mode = "maxIO"
storage_type = "standard"

@@ -48,8 +48,8 @@ node_pools = {
"min_nodes" = 0
"max_nodes" = 5
"node_taints" = ["workload.sas.com/class=cas:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "cas"
"node_labels" = {
"workload.sas.com/class" = "cas"
}
"custom_data" = ""
"metadata_http_endpoint" = "enabled"
4 changes: 2 additions & 2 deletions examples/sample-input.tfvars
@@ -27,12 +27,12 @@ postgres_servers = {
}

## Cluster config
kubernetes_version = "1.21"
kubernetes_version = "1.22"
default_nodepool_node_count = 2
default_nodepool_vm_type = "m5.2xlarge"
default_nodepool_custom_data = ""

## General
## General
efs_performance_mode = "maxIO"
storage_type = "standard"
