diff --git a/_add-clusters-workloads.html.md.erb b/_add-clusters-workloads.html.md.erb index 2c3b64a4b..c13caaab7 100644 --- a/_add-clusters-workloads.html.md.erb +++ b/_add-clusters-workloads.html.md.erb @@ -2,7 +2,7 @@ 1. Add more workloads and create an additional cluster. For more information, see About Cluster Upgrades in _Maintaining Workload Uptime_ and Creating Clusters. - 1. Monitor the <%= vars.product_short %> control plane in the <%= vars.product_tile %> tile > Status tab. - Review the load and resource usage data for the <%= vars.control_plane %> and <%= vars.control_plane_db %> VMs. + 1. Monitor the Tanzu Kubernetes Grid Integrated Edition control plane in the Tanzu Kubernetes Grid Integrated Edition tile > Status tab. + Review the load and resource usage data for the TKGI API and TKGI Database VMs. If any levels are at capacity, scale up the VMs.
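For a command-line view of the same load and capacity data, BOSH can report per-VM vitals. A minimal sketch, assuming the `tkgi` BOSH environment alias and the example `pivotal-container-service-…` deployment name used elsewhere in these topics:

```
# Show CPU, memory, and disk vitals for every VM in the TKGI deployment
bosh -e tkgi -d pivotal-container-service-a1b2c333d444e5f66a77 vms --vitals
```

Sustained high CPU, memory, or persistent disk usage on the TKGI API or TKGI Database VMs is the same signal as a Status tab column at capacity: scale up the affected VMs.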
diff --git a/_api.html.md.erb b/_api.html.md.erb index cf0037a6a..c2cca38e0 100644 --- a/_api.html.md.erb +++ b/_api.html.md.erb @@ -1,23 +1,23 @@ Perform the following steps: -1. Click **<%= vars.control_plane %>**. +1. Click **TKGI API**. -1. Under **Certificate to secure the <%= vars.control_plane %>**, provide a certificate and private key pair. +1. Under **Certificate to secure the TKGI API**, provide a certificate and private key pair.
- ![<%= vars.control_plane %> pane configuration](images/tkgi-api.png) + ![TKGI API pane configuration](images/tkgi-api.png)
- The certificate that you supply must cover the specific subdomain that routes to the <%= vars.control_plane %> VM with TLS termination on the ingress. + The certificate that you supply must cover the specific subdomain that routes to the TKGI API VM with TLS termination on the ingress. If you use UAA as your OIDC provider, this certificate must be a proper certificate chain and have a SAN field.

Warning: TLS certificates generated for wildcard DNS records only work for a single domain level. For example, a certificate generated for *.tkgi.EXAMPLE.com does not permit communication to *.api.tkgi.EXAMPLE.com. - If the certificate does not contain the correct FQDN for the <%= vars.control_plane %>, calls to the API will fail.

+ If the certificate does not contain the correct FQDN for the TKGI API, calls to the API will fail.

You can enter your own certificate and private key pair, or have Ops Manager generate one for you.
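If you supply your own pair, it can be worth confirming the certificate's Subject Alternative Name entries before pasting them into the pane. A quick check, assuming `openssl` is installed and the certificate is saved locally as `tkgi-api.crt` (a placeholder file name):

```
# Print the SAN entries; the TKGI API FQDN must be covered here
openssl x509 -in tkgi-api.crt -noout -text | grep -A 1 "Subject Alternative Name"
```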
To generate a certificate using Ops Manager: 1. Click **Generate RSA Certificate** for a new install or **Change** to update a previously-generated certificate. - 1. Enter the domain for your API hostname. This must match the domain you configure under **<%= vars.control_plane %>** > **API Hostname (FQDN)** below, in the same pane. It can be a standard FQDN or a wildcard domain. + 1. Enter the domain for your API hostname. This must match the domain you configure under **TKGI API** > **API Hostname (FQDN)** below, in the same pane. It can be a standard FQDN or a wildcard domain. 1. Click **Generate**.
- ![<%= vars.control_plane %> certificate generation](images/tkgi-api-cert-gen.png) + ![TKGI API certificate generation](images/tkgi-api-cert-gen.png) <% if current_page.data.iaas == "GCP" %>

Note: If you deployed a global HTTP load balancer for Ops Manager without a certificate, you can configure the load balancer to use this newly-generated certificate. @@ -26,14 +26,14 @@ Perform the following steps: Preparing to Deploy Ops Manager on GCP Manually.

<% else %> <% end %> -1. Under **API Hostname (FQDN)**, enter the FQDN that you registered to point to the <%= vars.control_plane %> load balancer, such as `api.tkgi.example.com`. -To retrieve the public IP address or FQDN of the <%= vars.control_plane %> load balancer, +1. Under **API Hostname (FQDN)**, enter the FQDN that you registered to point to the TKGI API load balancer, such as `api.tkgi.example.com`. +To retrieve the public IP address or FQDN of the TKGI API load balancer, log in to your IaaS console. -

Note: The FQDN for the <%= vars.k8s_runtime_abbr %> API must not contain uppercase letters or trailing whitespace.

+

Note: The FQDN for the TKGI API must not contain uppercase letters or trailing whitespace.
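Before saving, you can also verify that the hostname resolves to the load balancer address you registered for the TKGI API. A minimal check, assuming `dig` is available and using the example FQDN from this page:

```
# Confirm the API hostname resolves to the TKGI API load balancer
dig +short api.tkgi.example.com
```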

1. Under **Worker VM Max in Flight**, enter the maximum number of non-canary worker instances to create, update or upgrade in parallel within an availability zone.

This field sets the `max_in_flight` variable value. - The `max_in_flight` setting limits the number of component instances the <%= vars.k8s_runtime_abbr %> CLI creates or starts simultaneously + The `max_in_flight` setting limits the number of component instances the TKGI CLI creates or starts simultaneously when running `tkgi create-cluster` or `tkgi update-cluster`. By default, `max_in_flight` is set to `4`, - limiting the <%= vars.k8s_runtime_abbr %> CLI to creating or starting a maximum of four component instances in parallel. + limiting the TKGI CLI to creating or starting a maximum of four component instances in parallel. 1. Click **Save**. diff --git a/_azs-networks-azure.html.md.erb b/_azs-networks-azure.html.md.erb index 63eccb3e5..0d748e146 100644 --- a/_azs-networks-azure.html.md.erb +++ b/_azs-networks-azure.html.md.erb @@ -1,9 +1,9 @@ -To configure the networks used by the <%= vars.product_short %> control plane: +To configure the networks used by the Tanzu Kubernetes Grid Integrated Edition control plane: 1. Click **Assign Networks**. ![Assign Networks pane in Ops Manager](images/azure/azs-networks-azure.png) -1. Under **Network**, select the infrastructure subnet that you created for <%= vars.product_short %> component VMs, such as the <%= vars.control_plane %> and <%= vars.control_plane_db %> VMs. For example, `infrastructure`. +1. Under **Network**, select the infrastructure subnet that you created for Tanzu Kubernetes Grid Integrated Edition component VMs, such as the TKGI API and TKGI Database VMs. For example, `infrastructure`. 1. Under **Service Network**, select the services subnet that you created for Kubernetes cluster VMs. For example, `services`. 1. Click **Save**. diff --git a/_azs-networks.html.md.erb b/_azs-networks.html.md.erb index 247966ec2..69c7aa19f 100644 --- a/_azs-networks.html.md.erb +++ b/_azs-networks.html.md.erb @@ -1,15 +1,15 @@ To configure the availability zones (AZs) and networks -used by the <%= vars.product_short %> control plane: +used by the Tanzu Kubernetes Grid Integrated Edition control plane: 1. Click **Assign AZs and Networks**. 1. Under **Place singleton jobs in**, select the AZ where you want to deploy the -<%= vars.control_plane %> and <%= vars.control_plane_db %>. +TKGI API and TKGI Database. ![Assign AZs and Networks pane in Ops Manager](images/azs-networks.png) -1. Under **Balance other jobs in**, select the AZ for balancing other <%= vars.product_short %> control plane jobs. -

Note: You must specify the Balance other jobs in AZ, but the selection has no effect in the current version of <%= vars.product_short %>. +1. Under **Balance other jobs in**, select the AZ for balancing other Tanzu Kubernetes Grid Integrated Edition control plane jobs. +

Note: You must specify the Balance other jobs in AZ, but the selection has no effect in the current version of Tanzu Kubernetes Grid Integrated Edition.

-1. Under **Network**, select the infrastructure subnet that you created for <%= vars.product_short %> component VMs, such as the <%= vars.control_plane %> and <%= vars.control_plane_db %> VMs. +1. Under **Network**, select the infrastructure subnet that you created for Tanzu Kubernetes Grid Integrated Edition component VMs, such as the TKGI API and TKGI Database VMs. 1. Under **Service Network**, select the services subnet that you created for Kubernetes cluster VMs. 1. Click **Save**. diff --git a/_bbr-supported-components.html.md.erb b/_bbr-supported-components.html.md.erb index 9457791cd..8850733b6 100644 --- a/_bbr-supported-components.html.md.erb +++ b/_bbr-supported-components.html.md.erb @@ -1,6 +1,6 @@ BBR can back up the following components: * BOSH Director -* <%= vars.product_short %> control plane API VM and its ETCD database -* <%= vars.product_short %> control plane database VM (MySQL) -* <%= vars.product_short %> cluster data, from the clusters' ETCD databases +* Tanzu Kubernetes Grid Integrated Edition control plane API VM and its ETCD database +* Tanzu Kubernetes Grid Integrated Edition control plane database VM (MySQL) +* Tanzu Kubernetes Grid Integrated Edition cluster data, from the clusters' ETCD databases diff --git a/_bosh-ssh-api.html.md.erb b/_bosh-ssh-api.html.md.erb index c91c6ec74..4d99cb2a9 100644 --- a/_bosh-ssh-api.html.md.erb +++ b/_bosh-ssh-api.html.md.erb @@ -1,6 +1,6 @@ 1. Log in to the BOSH Director. For instructions, see [Log in to the BOSH Director VM](diagnostic-tools.html#alias). -1. To identify your <%= vars.k8s_runtime_abbr %> deployment name, run the following command: +1. To identify your TKGI deployment name, run the following command: ``` bosh -e ENVIRONMENT deployments @@ -12,10 +12,10 @@ ```console $ bosh -e tkgi deployments ``` - Your <%= vars.k8s_runtime_abbr %> deployment name begins with `pivotal-container-service` and includes + Your TKGI deployment name begins with `pivotal-container-service` and includes a BOSH-generated identifier. -1. To identify your <%= vars.control_plane %> VM name, run the following command: +1. To identify your TKGI API VM name, run the following command: ``` bosh -e ENVIRONMENT -d DEPLOYMENT vms @@ -24,20 +24,20 @@ Where: * `ENVIRONMENT` is the BOSH environment alias. - * `DEPLOYMENT` is your <%= vars.k8s_runtime_abbr %> deployment name. + * `DEPLOYMENT` is your TKGI deployment name. For example: ```console $ bosh -e tkgi -d pivotal-container-service-a1b2c333d444e5f66a77 vms ``` - Your <%= vars.control_plane %> VM name begins with `pivotal-container-service` and includes a + Your TKGI API VM name begins with `pivotal-container-service` and includes a BOSH-generated identifier. -

Note: The <%= vars.control_plane %> VM identifier is different from the identifier in your <%= vars.k8s_runtime_abbr %> +

Note: The TKGI API VM identifier is different from the identifier in your TKGI deployment name.

-1. To SSH into the <%= vars.control_plane %> VM: +1. To SSH into the TKGI API VM: ``` bosh -e ENVIRONMENT -d DEPLOYMENT ssh TKGI-API-VM @@ -46,8 +46,8 @@ Where: * `ENVIRONMENT` is the BOSH environment alias. - * `DEPLOYMENT` is your <%= vars.k8s_runtime_abbr %> deployment name. - * `TKGI-API-VM` is your <%= vars.control_plane %> VM name. + * `DEPLOYMENT` is your TKGI deployment name. + * `TKGI-API-VM` is your TKGI API VM name. For example: ```console diff --git a/_bosh-ssh-db.html.md.erb b/_bosh-ssh-db.html.md.erb index c7ee618a9..bafd1fc03 100644 --- a/_bosh-ssh-db.html.md.erb +++ b/_bosh-ssh-db.html.md.erb @@ -1,6 +1,6 @@ 1. Log in to the BOSH Director. For instructions, see [Log in to the BOSH Director VM](diagnostic-tools.html#alias). -1. To identify your <%= vars.k8s_runtime_abbr %> deployment name: +1. To identify your TKGI deployment name: ``` bosh -e ENVIRONMENT deployments @@ -12,10 +12,10 @@ ```console $ bosh -e tkgi deployments ``` - Your <%= vars.k8s_runtime_abbr %> deployment name begins with `pivotal-container-service` and includes + Your TKGI deployment name begins with `pivotal-container-service` and includes a BOSH-generated identifier. -1. To identify your <%= vars.control_plane_db %> VM names: +1. To identify your TKGI Database VM names: ``` bosh -e ENVIRONMENT -d DEPLOYMENT vms @@ -24,18 +24,18 @@ Where: * `ENVIRONMENT` is the BOSH environment alias. - * `DEPLOYMENT` is your <%= vars.k8s_runtime_abbr %> deployment name. + * `DEPLOYMENT` is your TKGI deployment name. For example: ```console $ bosh -e tkgi -d pivotal-container-service-a1b2c333d444e5f66a77 vms ``` - Your <%= vars.control_plane_db %> VM names begin with `pks-db` and include a + Your TKGI Database VM names begin with `pks-db` and include a BOSH-generated identifier. -1. Choose one of the returned <%= vars.control_plane_db %> VMs as the database VM to SSH into. -1. To SSH into the selected <%= vars.control_plane_db %> VM, run the following command: +1. Choose one of the returned TKGI Database VMs as the database VM to SSH into. +1. To SSH into the selected TKGI Database VM, run the following command: ``` bosh -e ENVIRONMENT -d DEPLOYMENT ssh TKGI-DB-VM @@ -44,8 +44,8 @@ Where: * `ENVIRONMENT` is the BOSH environment alias. - * `DEPLOYMENT` is your <%= vars.k8s_runtime_abbr %> deployment name. - * `TKGI-DB-VM` is the name of the <%= vars.control_plane_db %> VM to SSH into. + * `DEPLOYMENT` is your TKGI deployment name. + * `TKGI-DB-VM` is the name of the TKGI Database VM to SSH into. For example: ```console diff --git a/_cloud-provider.html.md.erb b/_cloud-provider.html.md.erb index 1f8a2f72f..450e0b88f 100644 --- a/_cloud-provider.html.md.erb +++ b/_cloud-provider.html.md.erb @@ -1,4 +1,4 @@ -In the procedure below, you use credentials for vCenter master VMs. You must have provisioned the service account with the correct permissions. For more information, see [Create the Master Node Service Account](vsphere-prepare-env.html#create-master) in _Preparing vSphere Before Deploying <%= vars.product_short %>_. +In the procedure below, you use credentials for vCenter master VMs. You must have provisioned the service account with the correct permissions. For more information, see [Create the Master Node Service Account](vsphere-prepare-env.html#create-master) in _Preparing vSphere Before Deploying Tanzu Kubernetes Grid Integrated Edition_. 
To configure your Kubernetes cloud provider settings, follow the procedure below: @@ -7,7 +7,7 @@ To configure your Kubernetes cloud provider settings, follow the procedure below vSphere pane configuration 1. Ensure the values in the following procedure match those in the **vCenter Config** section of the Ops Manager tile: 1. Enter your **vCenter Master Credentials**. Enter the vCenter Server user name using the format `user@domainname`, for example: "_user@example.com_". - For more information about the master node service account, see [Preparing vSphere Before Deploying <%= vars.product_short %>](vsphere-prepare-env.html). + For more information about the master node service account, see [Preparing vSphere Before Deploying Tanzu Kubernetes Grid Integrated Edition](vsphere-prepare-env.html).

Warning: The vSphere Container Storage Plug-in will not function if you do not specify the domain name for active directory users.

1. Enter your **vCenter Host**. For example, `vcenter-example.com`.

Note: The FQDN for the vCenter Server cannot contain uppercase letters.

@@ -16,7 +16,7 @@ To configure your Kubernetes cloud provider settings, follow the procedure below Populate **Datastore Name** with the Persistent Datastore name configured in your **BOSH Director** tile under **vCenter Config** > **Persistent Datastore Names**. Enter only a single Persistent datastore in the **Datastore Name** field. - - The vSphere datastore type must be Datastore. <%= vars.product_short %> does not support the use of vSphere Datastore Clusters with or without Storage DRS. For more information, see Datastores and Datastore Clusters in the vSphere documentation. + - The vSphere datastore type must be Datastore. Tanzu Kubernetes Grid Integrated Edition does not support the use of vSphere Datastore Clusters with or without Storage DRS. For more information, see Datastores and Datastore Clusters in the vSphere documentation. - The Datastore Name is the default datastore used if the Kubernetes cluster StorageClass does not define a StoragePolicy. Do not enter a datastore that is a list of BOSH Job/VMDK datastores. For more information, see PersistentVolume Storage Options on vSphere. - For multi-AZ and multi-cluster environments, your Datastore Name must be a shared Persistent datastore available to each vSphere cluster. Do not enter a datastore that is local to a single cluster. For more information, see PersistentVolume Storage Options on vSphere. diff --git a/_cluster-monitoring.html.md.erb b/_cluster-monitoring.html.md.erb index 19aed72b2..7577eb94b 100644 --- a/_cluster-monitoring.html.md.erb +++ b/_cluster-monitoring.html.md.erb @@ -37,7 +37,7 @@ To use Wavefront with Windows worker-based clusters, developers must install Wav To enable and configure Wavefront monitoring: -1. In the <%= vars.product_tile %> tile, select **In-Cluster Monitoring**. +1. In the Tanzu Kubernetes Grid Integrated Edition tile, select **In-Cluster Monitoring**. 1. Under **Wavefront Integration**, select **Yes**. 1. Under **Wavefront URL**, enter the URL of your Wavefront subscription. For example: ```console @@ -47,14 +47,14 @@ To enable and configure Wavefront monitoring: 1. (Optional) For installations that require a proxy server for outbound Internet access, enable access by entering values for **HTTP Proxy Host**, **HTTP Proxy Port**, **Proxy username**, and **Proxy password**. 1. Click **Save**. -The <%= vars.product_tile %> tile does not validate your Wavefront configuration settings. To verify your setup, look for cluster and pod metrics in Wavefront. +The Tanzu Kubernetes Grid Integrated Edition tile does not validate your Wavefront configuration settings. To verify your setup, look for cluster and pod metrics in Wavefront. <% if current_page.data.iaas == "vSphere" || current_page.data.iaas == "vSphere-NSX-T" %> #### VMware vRealize Operations Management Pack for Container Monitoring -You can monitor <%= vars.product_short %> Kubernetes clusters with VMware vRealize Operations Management Pack for Container Monitoring. +You can monitor Tanzu Kubernetes Grid Integrated Edition Kubernetes clusters with VMware vRealize Operations Management Pack for Container Monitoring. -To integrate <%= vars.product_short %> with VMware vRealize Operations Management Pack for Container Monitoring, you must deploy a container running [cAdvisor](https://github.com/google/cadvisor) in your <%= vars.k8s_runtime_abbr %> deployment. 
+To integrate Tanzu Kubernetes Grid Integrated Edition with VMware vRealize Operations Management Pack for Container Monitoring, you must deploy a container running [cAdvisor](https://github.com/google/cadvisor) in your TKGI deployment. cAdvisor is an open source tool that provides monitoring and statistics for Kubernetes clusters. @@ -64,7 +64,7 @@ To deploy a cAdvisor container: 1. Under **Deploy cAdvisor**, select **Yes**. 1. Click **Save**. -For more information about integrating this type of monitoring with <%= vars.k8s_runtime_abbr %>, see the [VMware vRealize Operations Management Pack for Container Monitoring User Guide](https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.4/container-monitoring/GUID-BD6B5510-4A16-412D-B5AD-43F74C300C91.html) and [Release Notes](https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.4/rn/Container-Monitoring-Release-Notes.html) in the VMware documentation. +For more information about integrating this type of monitoring with TKGI, see the [VMware vRealize Operations Management Pack for Container Monitoring User Guide](https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.4/container-monitoring/GUID-BD6B5510-4A16-412D-B5AD-43F74C300C91.html) and [Release Notes](https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.4/rn/Container-Monitoring-Release-Notes.html) in the VMware documentation. <% else %> #### cAdvisor @@ -95,11 +95,11 @@ To enable clusters to send Kubernetes node metrics and pod metrics to metric sinks: 1. In **In-Cluster Monitoring**, select **Enable Metric Sink Resources**. -If you enable this check box, <%= vars.product_short %> deploys Telegraf as a +If you enable this check box, Tanzu Kubernetes Grid Integrated Edition deploys Telegraf as a `DaemonSet`, a pod that runs on each worker node in all your Kubernetes clusters. 1. (Optional) To enable Node Exporter to send worker node metrics to metric sinks of kind `ClusterMetricSink`, select **Enable node exporter on workers**. -If you enable this check box, <%= vars.product_short %> deploys Node Exporter as +If you enable this check box, Tanzu Kubernetes Grid Integrated Edition deploys Node Exporter as a `DaemonSet`, a pod that runs on each worker node in all your Kubernetes clusters. @@ -119,7 +119,7 @@ _Monitoring Workers and Workloads_. To enable clusters to send Kubernetes API events and pod logs to log sinks: 1. Select **Enable Log Sink Resources**. If you enable this check box, -<%= vars.product_short %> deploys Fluent Bit as a `DaemonSet`, a pod that runs +Tanzu Kubernetes Grid Integrated Edition deploys Fluent Bit as a `DaemonSet`, a pod that runs on each worker node in all your Kubernetes clusters. 1. (Optional) To increase the Fluent Bit Pod memory limit, enter a value greater than 100 in the **Fluent-bit container memory limit(Mi)** field. diff --git a/_console-usage-data.html.md.erb b/_console-usage-data.html.md.erb index c0e9c62fe..7cb377381 100644 --- a/_console-usage-data.html.md.erb +++ b/_console-usage-data.html.md.erb @@ -14,13 +14,13 @@ To configure VMware's Customer Experience Improvement Program (CEIP), do the fol * Your entitlement account number or Tanzu customer number. If you are a VMware customer, you can find your entitlement account number in your **Account Summary** on [my.vmware.com](https://my.vmware.com). If you are a Pivotal customer, you can find your Pivotal Customer Number in your Pivotal Order Confirmation email. 
- * A descriptive name for your <%= vars.k8s_runtime_abbr %> installation. + * A descriptive name for your TKGI installation. The label you assign to this installation will be used in the reports to identify the environment. 1. To provide information about the purpose for this installation, select an option. ![CEIP installation type](./images/ceip-telemetry-type_mc.png) 1. Click **Save**. -

Note: If you join the CEIP Program for <%= vars.product_short %>, open your firewall to allow outgoing access to +

Note: If you join the CEIP Program for Tanzu Kubernetes Grid Integrated Edition, open your firewall to allow outgoing access to https://vcsa.vmware.com/ph on port 443.
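A quick way to confirm the firewall rule from inside the deployment network is a simple connectivity probe; the endpoint is not expected to return meaningful content, so the only thing to check is that the TCP/TLS connection on port 443 succeeds (sketch, assuming `curl` is available on the VM you test from):

```
# Verify outbound HTTPS access to the CEIP/telemetry endpoint
curl -sv --connect-timeout 5 https://vcsa.vmware.com/ph > /dev/null
```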

-

Note: Even if you do not wish to participate in CIEP, <%= vars.product_short %>-provisioned clusters send usage data to the <%= vars.k8s_runtime_abbr %> control plane. - However, this data is not sent to VMware and remains on your <%= vars.product_short %> installation.

+

Note: Even if you do not wish to participate in CEIP, Tanzu Kubernetes Grid Integrated Edition-provisioned clusters send usage data to the TKGI control plane. + However, this data is not sent to VMware and remains on your Tanzu Kubernetes Grid Integrated Edition installation.

diff --git a/_create-auth-token-var.html.md.erb b/_create-auth-token-var.html.md.erb index 9ed6b58ee..224091b4f 100644 --- a/_create-auth-token-var.html.md.erb +++ b/_create-auth-token-var.html.md.erb @@ -6,9 +6,9 @@ ``` Where: - * `TKGI-API` is the FQDN of your <%= vars.control_plane %> endpoint. For example, `api.tkgi.example.com`. - * `USER-ID` is your <%= vars.product_short %> user ID. - * `PASSWORD` is your <%= vars.product_short %> password. + * `TKGI-API` is the FQDN of your TKGI API endpoint. For example, `api.tkgi.example.com`. + * `USER-ID` is your Tanzu Kubernetes Grid Integrated Edition user ID. + * `PASSWORD` is your Tanzu Kubernetes Grid Integrated Edition password. * `YOUR-ACCESS-TOKEN` is the name of your access token environment variable. For example: diff --git a/_errands.html.md.erb b/_errands.html.md.erb index 01a7b720a..01c39f016 100644 --- a/_errands.html.md.erb +++ b/_errands.html.md.erb @@ -1,7 +1,7 @@ Errands are scripts that run at designated points during an installation. To configure which post-deploy and pre-delete errands run for -<%= vars.product_short %>: +Tanzu Kubernetes Grid Integrated Edition: 1. Make a selection in the dropdown next to each errand. <% if current_page.data.iaas == "vSphere-NSX-T" %> @@ -21,39 +21,39 @@ To configure which post-deploy and pre-delete errands run for <% end %> 1. (Optional) Set the **Run smoke tests** errand to **On**. - The Smoke Test errand smoke tests the <%= vars.k8s_runtime_abbr %> upgrade by creating and deleting a test Kubernetes cluster. + The Smoke Test errand smoke tests the TKGI upgrade by creating and deleting a test Kubernetes cluster. If test cluster creation or deletion fails, the errand fails, and the installation of the - <%= vars.k8s_runtime_abbr %> tile halts. + TKGI tile halts. <% if current_page.data.iaas == "vSphere-NSX-T" %> - The errand uses the <%= vars.k8s_runtime_abbr %> CLI to create the test cluster configured using either - the configuration settings on the <%= vars.k8s_runtime_abbr %> tile - the default, or a network profile. + The errand uses the TKGI CLI to create the test cluster configured using either + the configuration settings on the TKGI tile - the default, or a network profile. -1. (Optional) To configure the Smoke Test errand to use a network profile instead of the default configuration settings on the <%= vars.k8s_runtime_abbr %> tile: +1. (Optional) To configure the Smoke Test errand to use a network profile instead of the default configuration settings on the TKGI tile: * Create a network profile with your preferred smoke test settings. * Configure **Errand Settings** > **Smoke tests - Network Profile Name** with the network profile name. Smoke Test cluster network profile assignment in the Smoke tests - Network Profile Name field. <% else %> - The errand uses the <%= vars.k8s_runtime_abbr %> CLI to create the test cluster configured using - the configuration settings on the <%= vars.k8s_runtime_abbr %> tile. + The errand uses the TKGI CLI to create the test cluster configured using + the configuration settings on the TKGI tile. <% end %> 1. (Optional) To ensure that all of your cluster VMs are patched, configure the **Upgrade all clusters errand** errand to **On**. <% if vars.product_version == "v1.17" %> -

Warning: If you have <%= vars.k8s_runtime_abbr %>-provisioned Windows worker clusters, - do not activate the Upgrade all clusters errand before upgrading to the <%= vars.k8s_runtime_abbr %> v1.17 tile. +

Warning: If you have TKGI-provisioned Windows worker clusters, + do not activate the Upgrade all clusters errand before upgrading to the TKGI v1.17 tile. You cannot use the Upgrade all clusters errand because you must manually migrate each individual Windows worker cluster to the CSI Driver for vSphere. For more information, see Configure vSphere CSI for Windows in Deploying and Managing Cloud Native Storage (CNS) on vSphere.

<% end %>
- Updating the <%= vars.product_tile %> tile with a new + Updating the Tanzu Kubernetes Grid Integrated Edition tile with a new Linux stemcell and the **Upgrade all clusters errand** enabled triggers the rolling of every Linux VM in each Kubernetes cluster. - Similarly, updating the <%= vars.product_tile %> tile with a new Windows stemcell triggers + Similarly, updating the Tanzu Kubernetes Grid Integrated Edition tile with a new Windows stemcell triggers the rolling of every Windows VM in your Kubernetes clusters.

Note: <%= vars.recommended_by %> recommends that you review the VMware Tanzu Network metadata and confirm stemcell version compatibility before using diff --git a/_global-proxy.html.md.erb b/_global-proxy.html.md.erb index 3b138f1e4..e75be9dce 100644 --- a/_global-proxy.html.md.erb +++ b/_global-proxy.html.md.erb @@ -1,15 +1,15 @@
Networking pane configuration
-1. (Optional) Configure <%= vars.product_short %> to use a proxy. +1. (Optional) Configure Tanzu Kubernetes Grid Integrated Edition to use a proxy.

Production environments can deny direct access to public Internet services and between internal services by placing an HTTP or HTTPS proxy in the network path between Kubernetes nodes and those services.
-Configure <%= vars.product_short %> to use your proxy and activate the following: - * <%= vars.control_plane %> access to public Internet services and other internal services. - * <%= vars.product_short %>-deployed Kubernetes nodes access to public Internet services and other internal services. - * <%= vars.product_short %> Telemetry ability to forward Telemetry data to the CEIP and Telemetry program. +Configure Tanzu Kubernetes Grid Integrated Edition to use your proxy and activate the following: + * TKGI API access to public Internet services and other internal services. + * Tanzu Kubernetes Grid Integrated Edition-deployed Kubernetes nodes access to public Internet services and other internal services. + * Tanzu Kubernetes Grid Integrated Edition Telemetry ability to forward Telemetry data to the CEIP and Telemetry program.

Note: This setting does not set the proxy for running Kubernetes workloads or pods.

1. To complete your global proxy configuration for all outgoing HTTP/HTTPS traffic from your Kubernetes clusters, perform the following steps: @@ -27,16 +27,16 @@ Configure <%= vars.product_short %> to use your proxy and activate the following 1. (Optional) If your HTTPS proxy uses basic authentication, enter the user name and password in the **HTTPS Proxy Credentials** fields. 1. Under **No Proxy**, enter the comma-separated list of IP addresses that must bypass the proxy to - allow for internal <%= vars.product_short %> communication. + allow for internal Tanzu Kubernetes Grid Integrated Edition communication.
Include `127.0.0.1` and `localhost` in the **No Proxy** list.
Also include the following in the **No Proxy** list: - * Your <%= vars.product_short %> environment's CIDRs, such as - the service network CIDR where your <%= vars.product_short %> cluster is deployed, + * Your Tanzu Kubernetes Grid Integrated Edition environment's CIDRs, such as + the service network CIDR where your Tanzu Kubernetes Grid Integrated Edition cluster is deployed, the deployment network CIDR, the node network IP block CIDR, and the pod network IP block CIDR.
- * The FQDN of any registry, such as the Harbor API FQDN, or component communicating with <%= vars.product_short %>, using a hostname + * The FQDN of any registry, such as the Harbor API FQDN, or component communicating with Tanzu Kubernetes Grid Integrated Edition, using a hostname instead of an IP address.
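Purely as an illustration — the exact entries depend on your environment's CIDRs and hostnames, and every value below is a placeholder — a populated **No Proxy** field might look like:

```
127.0.0.1,localhost,10.100.0.0/8,10.200.0.0/8,.internal,.svc,.svc.cluster.local,api.tkgi.example.com,harbor.example.com
```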
<% if current_page.data.topic=="proxies-nsx-t" || current_page.data.iaas == "vSphere" %> @@ -78,7 +78,7 @@ Configure <%= vars.product_short %> to use your proxy and activate the following 169.254.169.254, 10.100.0.0/8 and 10.200.0.0/8 IP address ranges, .internal, .svc,.svc.cluster.local, .svc.cluster, - and your <%= vars.product_short %> FQDN are not proxied. This allows internal <%= vars.product_short %> communication. + and your Tanzu Kubernetes Grid Integrated Edition FQDN are not proxied. This allows internal Tanzu Kubernetes Grid Integrated Edition communication.

Do not use the _ character in the No Proxy field. Entering an underscore character in this field can cause upgrades to fail. @@ -93,7 +93,7 @@ Configure <%= vars.product_short %> to use your proxy and activate the following 10.100.0.0/8 and 10.200.0.0/8 IP address ranges, .internal, .svc,.svc.cluster.local, .svc.cluster, - and your <%= vars.product_short %> FQDN are not proxied. This allows internal <%= vars.product_short %> communication. + and your Tanzu Kubernetes Grid Integrated Edition FQDN are not proxied. This allows internal Tanzu Kubernetes Grid Integrated Edition communication.

Do not use the _ character in the No Proxy field. Entering an underscore character in this field can cause upgrades to fail. diff --git a/_harbor.html.md.erb b/_harbor.html.md.erb index 4e69a3711..0f1129f0f 100644 --- a/_harbor.html.md.erb +++ b/_harbor.html.md.erb @@ -1 +1 @@ -Integrate VMware Harbor with <%= vars.product_short %> to store and manage container images. For more information, see [Integrating VMware Harbor Registry with <%= vars.product_short %>](https://docs.vmware.com/en/VMware-Harbor-Registry/services/vmware-harbor-registry/GUID-integrating-pks.html). +Integrate VMware Harbor with Tanzu Kubernetes Grid Integrated Edition to store and manage container images. For more information, see [Integrating VMware Harbor Registry with Tanzu Kubernetes Grid Integrated Edition](https://docs.vmware.com/en/VMware-Harbor-Registry/services/vmware-harbor-registry/GUID-integrating-pks.html). diff --git a/_host-monitoring.html.md.erb b/_host-monitoring.html.md.erb index 4b61d91f8..76274ccce 100644 --- a/_host-monitoring.html.md.erb +++ b/_host-monitoring.html.md.erb @@ -24,18 +24,18 @@ You can configure one or more of the following: * **VMware vRealize Log Insight (vRLI) Integration**: To configure VMware vRealize Log Insight (vRLI) Integration, see [VMware vRealize Log Insight Integration](#vrealize-logs) below. The vRLI integration pulls logs from all BOSH jobs and containers running in the cluster, including node logs from core Kubernetes and BOSH processes, Kubernetes event logs, and pod `stdout` and `stderr`. <% end %> -* **Telegraf**: To configure Telegraf, see [Configuring Telegraf in <%= vars.k8s_runtime_abbr %>](monitor-etcd.html). The Telegraf agent sends metrics from TKGI API, control plane node, and worker node VMs to a monitoring service, such as Wavefront or Datadog. +* **Telegraf**: To configure Telegraf, see [Configuring Telegraf in TKGI](monitor-etcd.html). The Telegraf agent sends metrics from TKGI API, control plane node, and worker node VMs to a monitoring service, such as Wavefront or Datadog. For more information about these components, see -[Monitoring <%= vars.k8s_runtime_abbr %> and <%= vars.k8s_runtime_abbr %>-Provisioned Clusters](host-monitoring.html). +[Monitoring TKGI and TKGI-Provisioned Clusters](host-monitoring.html). #### Syslog -To configure Syslog for all BOSH-deployed VMs in <%= vars.product_short %>: +To configure Syslog for all BOSH-deployed VMs in Tanzu Kubernetes Grid Integrated Edition: 1. Click **Host Monitoring**. -1. Under **Enable Syslog for <%= vars.k8s_runtime_abbr %>**, select **Yes**. +1. Under **Enable Syslog for TKGI**, select **Yes**. 1. Under **Address**, enter the destination syslog endpoint. 1. Under **Port**, enter the destination syslog port. 1. Under **Transport Protocol**, select a transport protocol for log forwarding. 
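Before applying changes, it can help to confirm that the syslog destination is reachable from the network the BOSH-deployed VMs use. A rough check, assuming `nc` (netcat) is available and using placeholder values for the endpoint address and port:

```
# Check TCP reachability of the destination syslog endpoint
nc -vz syslog.example.com 514
```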
diff --git a/_increase_persistent_disk.html.md.erb b/_increase_persistent_disk.html.md.erb index 17fd15b22..a1e6ebd6a 100644 --- a/_increase_persistent_disk.html.md.erb +++ b/_increase_persistent_disk.html.md.erb @@ -2,7 +2,7 @@ ### Storage Requirements for Large Numbers of Pods If you expect the cluster workload to run a large number of pods continuously, -then increase the size of persistent disk storage allocated to the <%= vars.control_plane_db %> VM as follows: +then increase the size of persistent disk storage allocated to the TKGI Database VM as follows: diff --git a/_install-cli.html.md.erb b/_install-cli.html.md.erb index dbba69df5..063ceede5 100644 --- a/_install-cli.html.md.erb +++ b/_install-cli.html.md.erb @@ -1,5 +1,5 @@ -The <%= vars.k8s_runtime_abbr %> CLI and the Kubernetes CLI help you interact with your <%= vars.product_short %>-provisioned Kubernetes clusters and Kubernetes workloads. To install the CLIs, follow the instructions below: +The TKGI CLI and the Kubernetes CLI help you interact with your Tanzu Kubernetes Grid Integrated Edition-provisioned Kubernetes clusters and Kubernetes workloads. To install the CLIs, follow the instructions below: -* [Installing the <%= vars.k8s_runtime_abbr %> CLI](installing-cli.html) +* [Installing the TKGI CLI](installing-cli.html) * [Installing the Kubernetes CLI](installing-kubectl-cli.html) diff --git a/_install.html.md.erb b/_install.html.md.erb index 68e43b772..5d3c382bf 100644 --- a/_install.html.md.erb +++ b/_install.html.md.erb @@ -1,6 +1,6 @@ -To install <%= vars.product_short %>, do the following: +To install Tanzu Kubernetes Grid Integrated Edition, do the following: 1. Download the product file from [VMware Tanzu Network](https://network.pivotal.io). 1. Navigate to `https://YOUR-OPS-MANAGER-FQDN/` in a browser to log in to the Ops Manager Installation Dashboard. 1. Click **Import a Product** to upload the product file. -1. Under **<%= vars.product_tile %>** in the left column, click the plus sign to add this product to your staging area. +1. Under **Tanzu Kubernetes Grid Integrated Edition** in the left column, click the plus sign to add this product to your staging area. diff --git a/_k8s-profiles-uses.html.md.erb b/_k8s-profiles-uses.html.md.erb index 9f3af7799..900d51098 100644 --- a/_k8s-profiles-uses.html.md.erb +++ b/_k8s-profiles-uses.html.md.erb @@ -31,7 +31,7 @@ - +
Configure Pod Security Admission.Configure cluster-specific PSA in <%= vars.k8s_runtime_abbr %>. For more information, see Pod Security Admission in a <%= vars.k8s_runtime_abbr %> Cluster in Pod Security Admission in <%= vars.k8s_runtime_abbr %>.Configure cluster-specific PSA in TKGI. For more information, see Pod Security Admission in a TKGI Cluster in Pod Security Admission in TKGI.
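For context, cluster-specific PSA enforcement ultimately relies on standard Kubernetes namespace labels; as a generic illustration of the mechanism (not a TKGI-specific command, and `dev` is a placeholder namespace):

```
# Enforce the restricted Pod Security Standard for one namespace
kubectl label namespace dev pod-security.kubernetes.io/enforce=restricted
```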
@@ -131,12 +131,7 @@ instructions. ``` Where `IP-RANGE` is a CIDR notation IP range from which to assign service cluster IPs. - The IP range can be a maximum of two dual-stack CIDRs and must not overlap with any IP ranges assigned to nodes or pods. - <% if vars.product_version == "COMMENTED" %> - The specified range must not overlap with any IP ranges assigned to nodes or pods. - You can specify a single CIDR range or two dual-stack CIDR ranges. - <% end %> - + The IP range can be a maximum of two dual-stack CIDRs and must not overlap with any IP ranges assigned to nodes or pods. For more information, see kube-apiserver [Options](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options) in the Kubernetes documentation. diff --git a/_lb-resource-config.html.md.erb b/_lb-resource-config.html.md.erb index 5a7f7e61b..38bdadf37 100644 --- a/_lb-resource-config.html.md.erb +++ b/_lb-resource-config.html.md.erb @@ -1,3 +1,3 @@

Note: After you click Apply Changes for the first time, -BOSH assigns the <%= vars.control_plane %> VM an IP address. BOSH uses the name you provide in the LOAD BALANCERS field -to locate your load balancer and then connect the load balancer to the <%= vars.control_plane %> VM using its new IP address.

+BOSH assigns the TKGI API VM an IP address. BOSH uses the name you provide in the LOAD BALANCERS field +to locate your load balancer and then connect the load balancer to the TKGI API VM using its new IP address.

diff --git a/_login-api.html.md.erb b/_login-api.html.md.erb index bd6574a41..73e980708 100644 --- a/_login-api.html.md.erb +++ b/_login-api.html.md.erb @@ -4,9 +4,9 @@ ``` Where: - * `TKGI-API` is the domain name for the <%= vars.control_plane %> that you entered in **Ops Manager** > **<%= vars.product_tile %>** > **<%= vars.control_plane %>** > **API Hostname (FQDN)**. + * `TKGI-API` is the domain name for the TKGI API that you entered in **Ops Manager** > **Tanzu Kubernetes Grid Integrated Edition** > **TKGI API** > **API Hostname (FQDN)**. For example, `api.tkgi.example.com`. * `USERNAME` is your user name.

- See [Logging in to <%= vars.product_short %>](login.html) for more information about the `tkgi login` command. + See [Logging in to Tanzu Kubernetes Grid Integrated Edition](login.html) for more information about the `tkgi login` command. <%= partial "saml-sso-login" %> diff --git a/_networking-vsphere.html.md.erb b/_networking-vsphere.html.md.erb index c9655a9c5..5a329031e 100644 --- a/_networking-vsphere.html.md.erb +++ b/_networking-vsphere.html.md.erb @@ -8,7 +8,7 @@ To configure networking, do the following: Networking pane configuration

Note: - Antrea is not supported for the <%= vars.k8s_runtime_abbr %> Windows-worker on vSphere without NSX beta feature.

+ Antrea is not supported for the TKGI Windows-worker on vSphere without NSX beta feature.

1. (Optional) Enter values for **Kubernetes Pod Network CIDR Range** and **Kubernetes Service Network CIDR Range**. * For Windows worker-based clusters the **Kubernetes Service Network CIDR Range** setting must be `10.220.0.0/16`.

Note: vSphere on Flannel does not support networking Windows containers. @@ -24,7 +24,7 @@ To configure networking, do the following: <%= vars.recommended_by %> recommends that you switch your Flannel CNI-configured clusters to the Antrea CNI. For more information about Flannel CNI deprecation, see About Switching from the Flannel CNI to the Antrea CNI - in About <%= vars.product_short %> Upgrades. + in About Tanzu Kubernetes Grid Integrated Edition Upgrades.

1. (Optional) Enter values for **Kubernetes Pod Network CIDR Range** and **Kubernetes Service Network CIDR Range**. * Ensure that the CIDR ranges do not overlap and have sufficient space for your deployed services. diff --git a/_nsx-t-ingress-lb-overview.html.md.erb b/_nsx-t-ingress-lb-overview.html.md.erb index 6162f8f23..4e39032f0 100644 --- a/_nsx-t-ingress-lb-overview.html.md.erb +++ b/_nsx-t-ingress-lb-overview.html.md.erb @@ -2,17 +2,17 @@ The NSX Load Balancer is a logical load balancer that handles a number of functions using virtual servers and pools. The NSX load balancer creates a load balancer service for each Kubernetes cluster provisioned -by <%= vars.product_short %> with NSX. For each load balancer service, NCP, by way of the Kubernetes CustomResourceDefinition (CRD), +by Tanzu Kubernetes Grid Integrated Edition with NSX. For each load balancer service, NCP, by way of the Kubernetes CustomResourceDefinition (CRD), creates corresponding NSXLoadBalancerMonitor objects. -By default <%= vars.product_short %> deploys the following NSX virtual servers for each Kubernetes cluster: +By default Tanzu Kubernetes Grid Integrated Edition deploys the following NSX virtual servers for each Kubernetes cluster: * One TCP layer 4 load balancer virtual server for the Kubernetes API server. * One TCP layer 4 auto-scaled load balancer virtual server for **each** Kubernetes service resource of `type: LoadBalancer`. * Two HTTP/HTTPS layer 7 ingress routing virtual servers. These virtual server are attached to the Kubernetes Ingress Controller cluster load balancer service and can be manually scaled. -<%= vars.product_short %> uses Kubernetes custom resources to +Tanzu Kubernetes Grid Integrated Edition uses Kubernetes custom resources to monitor the state of the NSX load balancer service and scale the virtual servers created for ingress. <% if current_page.data.lbtype == "monitor" %> diff --git a/_other-super-certificates.html.md.erb b/_other-super-certificates.html.md.erb index 608f3c341..670717df8 100644 --- a/_other-super-certificates.html.md.erb +++ b/_other-super-certificates.html.md.erb @@ -2,24 +2,24 @@ To create, delete, and modify NSX networking resources, <%= vars.platform_name % Users configure <%= vars.platform_name %> to authenticate to NSX Manager for different purposes in different tiles: -* **<%= vars.product_short %> tile**
- The <%= vars.product_tile %> tile uses NSX Manager to create load balancers, +* **Tanzu Kubernetes Grid Integrated Edition tile**
+ The Tanzu Kubernetes Grid Integrated Edition tile uses NSX Manager to create load balancers, providing a Kubernetes service described in the [Create an External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/) section of the Kubernetes documentation.

<% if current_page.data.authenttype == "pkstile" %> - To configure the **<%= vars.product_tile %>** tile's authentication to NSX Manager, see + To configure the **Tanzu Kubernetes Grid Integrated Edition** tile's authentication to NSX Manager, see [About the NSX Manager Superuser Principal Identity](#certificates-nsx-pid-about), below.

<% end %> <% if current_page.data.authenttype == "boshtile" %> - To configure the **<%= vars.product_tile %>** tile's authentication to NSX Manager, see + To configure the **Tanzu Kubernetes Grid Integrated Edition** tile's authentication to NSX Manager, see the topic [Generating and Registering the NSX Manager Superuser Principal Identity Certificate and Key](nsxt-generate-pi-cert.html).

<% end %> * **BOSH Director for vSphere tile**
The **BOSH Director for vSphere** tile uses NSX Manager to configure networking and security for external-facing <%= vars.platform_name %> component VMs, such as <%= vars.app_runtime_full %> routers.

<% if current_page.data.authenttype == "pkstile" %> To configure the **BOSH Director for vSphere** tile's authentication to NSX Manager, see - [Configure vCenter for <%= vars.product_short %>](vsphere-nsxt-om-config.html#vcenter-config) in _Configuring BOSH Director with NSX for <%= vars.product_short %>_.

+ [Configure vCenter for Tanzu Kubernetes Grid Integrated Edition](vsphere-nsxt-om-config.html#vcenter-config) in _Configuring BOSH Director with NSX for Tanzu Kubernetes Grid Integrated Edition_.

<% end %> <% if current_page.data.authenttype == "boshtile" %> To configure the **BOSH Director for vSphere** tile's authentication to NSX Manager, see - [Configure vCenter for <%= vars.product_short %>](#vcenter-config), below.
+ [Configure vCenter for Tanzu Kubernetes Grid Integrated Edition](#vcenter-config), below.
<% end %> diff --git a/_plans.html.md.erb b/_plans.html.md.erb index e72628258..e6377be12 100644 --- a/_plans.html.md.erb +++ b/_plans.html.md.erb @@ -2,7 +2,7 @@ A plan defines a set of resource types used for deploying a cluster. #### Activate a Plan

Note: Before configuring your Windows worker plan, you must first activate and configure Plan 1. -See Plans in Installing <%= vars.product_short %> on vSphere with NSX for more information. +See Plans in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX for more information.

<% else %> A plan defines a set of resource types used for deploying a cluster. @@ -54,7 +54,7 @@ You must activate and configure either **Plan 11**, **Plan 12**, or **Plan 13** <% end %> 1. Under **Name**, provide a unique name for the plan. 1. Under **Description**, edit the description as needed. -The plan description appears in the Services Marketplace, which developers can access by using the <%= vars.k8s_runtime_abbr %> CLI. +The plan description appears in the Services Marketplace, which developers can access by using the TKGI CLI. <% if current_page.data.windowsclusters == true %> 1. Select **Enable HA Linux workers** to activate high availability Linux worker clusters. A high availability Linux worker cluster consists of three Linux worker nodes. @@ -65,19 +65,19 @@ A high availability Linux worker cluster consists of three Linux worker nodes. You can enter 1, 3, or 5.

Note: If you deploy a cluster with multiple control plane/etcd node VMs, confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see Hardware recommendations in the etcd documentation.

- In addition to meeting the hardware requirements for a multi-control plane node cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Configuring Telegraf in <%= vars.k8s_runtime_abbr %>.

-

WARNING: To change the number of control plane/etcd nodes for a plan, you must ensure that no existing clusters use the plan. <%= vars.product_short %> does not support changing the number of control plane/etcd nodes for plans with existing clusters. + In addition to meeting the hardware requirements for a multi-control plane node cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Configuring Telegraf in TKGI.

+

WARNING: To change the number of control plane/etcd nodes for a plan, you must ensure that no existing clusters use the plan. Tanzu Kubernetes Grid Integrated Edition does not support changing the number of control plane/etcd nodes for plans with existing clusters.

-1. Under **Master/ETCD VM Type**, select the type of VM to use for Kubernetes control plane/etcd nodes. For more information, including control plane node VM customization options, see the [Control Plane Node VM Size](vm-sizing.html#master-sizing) section of _VM Sizing for <%= vars.product_short %> Clusters_. +1. Under **Master/ETCD VM Type**, select the type of VM to use for Kubernetes control plane/etcd nodes. For more information, including control plane node VM customization options, see the [Control Plane Node VM Size](vm-sizing.html#master-sizing) section of _VM Sizing for Tanzu Kubernetes Grid Integrated Edition Clusters_. 1. Under **Master Persistent Disk Type**, select the size of the persistent disk for the Kubernetes control plane node VM. -1. Under **Master/ETCD Availability Zones**, select one or more AZs for the Kubernetes clusters deployed by <%= vars.product_short %>. -If you select more than one AZ, <%= vars.product_short %> deploys the control plane VM in the first AZ and the worker VMs across the remaining AZs. -If you are using multiple control plane nodes, <%= vars.product_short %> deploys the control plane and worker VMs across the AZs in round-robin fashion. -

Note: <%= vars.product_short %> does not support changing the AZs of existing control plane nodes.

+1. Under **Master/ETCD Availability Zones**, select one or more AZs for the Kubernetes clusters deployed by Tanzu Kubernetes Grid Integrated Edition. +If you select more than one AZ, Tanzu Kubernetes Grid Integrated Edition deploys the control plane VM in the first AZ and the worker VMs across the remaining AZs. +If you are using multiple control plane nodes, Tanzu Kubernetes Grid Integrated Edition deploys the control plane and worker VMs across the AZs in round-robin fashion. +

Note: Tanzu Kubernetes Grid Integrated Edition does not support changing the AZs of existing control plane nodes.

1. Under **Maximum number of workers on a cluster**, set the maximum number of -Kubernetes worker node VMs that <%= vars.product_short %> can deploy for each cluster. Enter any whole number in this field. +Kubernetes worker node VMs that Tanzu Kubernetes Grid Integrated Edition can deploy for each cluster. Enter any whole number in this field.
<% if current_page.data.windowsclusters == true %> ![Plan pane configuration, part two](images/plan2-win.png) @@ -85,7 +85,7 @@ Kubernetes worker node VMs that <%= vars.product_short %> can deploy for each cl ![Plan pane configuration, part two](images/plan2.png) <% end %>
-1. Under **Worker Node Instances**, specify the default number of Kubernetes worker nodes the <%= vars.k8s_runtime_abbr %> CLI provisions for each cluster. +1. Under **Worker Node Instances**, specify the default number of Kubernetes worker nodes the TKGI CLI provisions for each cluster. The **Worker Node Instances** setting must be less than, or equal to, the **Maximum number of workers on a cluster** setting.
For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you must have six worker nodes. For more information about PVs, see [PersistentVolumes](maintain-uptime.html#persistent-volumes) in *Maintaining Workload Uptime*. Provisioning a minimum of three worker nodes, or two nodes per AZ is also recommended for stateless workloads. @@ -94,15 +94,15 @@ Kubernetes worker node VMs that <%= vars.product_short %> can deploy for each cl

Note: Changing a plan's Worker Node Instances setting does not alter the number of worker nodes on existing clusters. For information about scaling an existing cluster, see - [Scale Horizontally by Changing the Number of Worker Nodes Using the <%= vars.k8s_runtime_abbr %> CLI](scale-clusters.html#scale-horizontal) + [Scale Horizontally by Changing the Number of Worker Nodes Using the TKGI CLI](scale-clusters.html#scale-horizontal) in _Scaling Existing Clusters_.
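For reference, that horizontal scale-out is a single TKGI CLI call; a sketch, assuming a cluster named `my-cluster` (a placeholder) being resized to five worker nodes:

```
# Resize an existing cluster to five worker nodes
tkgi update-cluster my-cluster --num-nodes 5
```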

1. Under **Worker VM Type**, select the type of VM to use for Kubernetes worker node VMs. For more information, including worker node VM customization options, -see [Worker Node VM Number and Size](vm-sizing.html#worker-sizing) in _VM Sizing for <%= vars.product_short %> Clusters_. +see [Worker Node VM Number and Size](vm-sizing.html#worker-sizing) in _VM Sizing for Tanzu Kubernetes Grid Integrated Edition Clusters_. <% if current_page.data.iaas != "GCP" %>

Note: - <%= vars.product_short %> requires a Worker VM Type with an ephemeral disk size of 32 GB or more. + Tanzu Kubernetes Grid Integrated Edition requires a Worker VM Type with an ephemeral disk size of 32 GB or more.

<% end %> <% if current_page.data.windowsclusters == true %> @@ -114,7 +114,7 @@ see [Worker Node VM Number and Size](vm-sizing.html#worker-sizing) in _VM Sizing 1. Under **Worker Persistent Disk Type**, select the size of the persistent disk for the Kubernetes worker node VMs. <% end %> -1. Under **Worker Availability Zones**, select one or more AZs for the Kubernetes worker nodes. <%= vars.product_short %> deploys worker nodes equally across the AZs you select. +1. Under **Worker Availability Zones**, select one or more AZs for the Kubernetes worker nodes. Tanzu Kubernetes Grid Integrated Edition deploys worker nodes equally across the AZs you select. 1. Under **Kubelet customization - system-reserved**, enter resource values that Kubelet can use to reserve resources for system daemons. For example, `memory=250Mi, cpu=150m`. For more information about system-reserved values, @@ -129,7 +129,7 @@ see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-c <% if current_page.data.windowsclusters == true %> 1. Under **Kubelet customization - Windows pause image location**, enter the location of your Windows pause image. The **Kubelet customization - Windows pause image location** default value of `mcr.microsoft.com/k8s/core/pause:3.6` -configures <%= vars.product_short %> to pull the Windows pause image from the Microsoft Docker registry. +configures Tanzu Kubernetes Grid Integrated Edition to pull the Windows pause image from the Microsoft Docker registry.
The Microsoft Docker registry cannot be accessed from within air-gapped environments. If you want to deploy Windows pods in an air-gapped environment you must upload a Windows pause image to an accessible private registry, and configure the **Kubelet customization - @@ -175,14 +175,14 @@ in the Kubernetes documentation. <% if vars.product_version == "v1.18" %> 1. (Optional) Activate or deactivate the **SecurityContextDeny** admission controller plugin. For more information see <% if current_page.data.windowsclusters == true %> -[Using Admission Control Plugins for <%= vars.product_short %> Clusters](./admission-plugins.html). +[Using Admission Control Plugins for Tanzu Kubernetes Grid Integrated Edition Clusters](./admission-plugins.html). See API compatibility in the Kubernetes documentation for additional information. <% else %> -[Using Admission Control Plugins for <%= vars.product_short %> Clusters](./admission-plugins.html). +[Using Admission Control Plugins for Tanzu Kubernetes Grid Integrated Edition Clusters](./admission-plugins.html). <% end %> -

Note: Support for SecurityContextDeny admission controller has been removed in <%= vars.k8s_runtime_abbr %> v1.18. The SecurityContextDeny admission controller has been deprecated, and the Kubernetes community recommends the controller not be used. - Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in <%= vars.k8s_runtime_abbr %>. +

Note: Support for SecurityContextDeny admission controller has been removed in TKGI v1.18. The SecurityContextDeny admission controller has been deprecated, and the Kubernetes community recommends the controller not be used. + Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in TKGI.

<% end %> <% if current_page.data.windowsclusters != true %> diff --git a/_ports-protocols-sphere.html.md.erb b/_ports-protocols-sphere.html.md.erb index 34388c489..cbd3a1552 100644 --- a/_ports-protocols-sphere.html.md.erb +++ b/_ports-protocols-sphere.html.md.erb @@ -93,7 +93,7 @@ The following table lists ports and protocols used for network communication bet | vRealize Operations Manager | NSX API VIP | TCP | 443 | HTTPS| <% else %> <% end %> -| vRealize Operations Manager | <%= vars.k8s_runtime_abbr %> Controller | TCP | 8443 | HTTPSCA| +| vRealize Operations Manager | TKGI Controller | TCP | 8443 | HTTPSCA| | vRealize Operations Manager | Kubernetes Cluster API Server -LB VIP | TCP | 8443 | HTTPSCA| | Admin/Operator Console | vRealize LogInsight | TCP | 443 | HTTPS| | Kubernetes Cluster Ingress Controller | vRealize LogInsight | TCP | 9000 | ingestion api| @@ -105,11 +105,11 @@ The following table lists ports and protocols used for network communication bet | NSX Manager/Controller Node | vRealize LogInsight | TCP | 9000 | ingestion api| <% else %> <% end %> -| <%= vars.k8s_runtime_abbr %> Controller | vRealize LogInsight | TCP | 9000 | ingestion api| +| TKGI Controller | vRealize LogInsight | TCP | 9000 | ingestion api| | Admin/Operator and Developer Consoles | Wavefront SaaS APM | TCP | 443 | HTTPS| | kube-system pod/wavefront-proxy | Wavefront SaaS APM | TCP | 443 | HTTPS| | kube-system pod/wavefront-proxy | Wavefront SaaS APM | TCP | 8443 | HTTPSCA| -| pks-system pod/wavefront-collector | <%= vars.k8s_runtime_abbr %> Controller | TCP | 24224 | Fluentd out_forward| +| pks-system pod/wavefront-collector | TKGI Controller | TCP | 24224 | Fluentd out_forward| | Admin/Operator Console | vRealize Network Insight Platform | TCP | 443 | HTTPS| | Admin/Operator Console | vRealize Network Insight Proxy | TCP | 22 | SSH| | vRealize Network Insight Proxy | Kubernetes Cluster API Server -LB VIP | TCP | 8443 | HTTPSCA| @@ -118,5 +118,5 @@ The following table lists ports and protocols used for network communication bet | vRealize Network Insight Proxy | NSX API VIP | TCP | 443 | HTTPS| <% else %> <% end %> -| vRealize Network Insight Proxy | <%= vars.k8s_runtime_abbr %> Controller | TCP | 8443 | HTTPSCA| -| vRealize Network Insight Proxy | <%= vars.k8s_runtime_abbr %> Controller | TCP | 9021 | TKGI API server| +| vRealize Network Insight Proxy | TKGI Controller | TCP | 8443 | HTTPSCA| +| vRealize Network Insight Proxy | TKGI Controller | TCP | 9021 | TKGI API server| diff --git a/_ports-protocols.html.md.erb b/_ports-protocols.html.md.erb index e0c0283fe..1f2050f7a 100644 --- a/_ports-protocols.html.md.erb +++ b/_ports-protocols.html.md.erb @@ -1,25 +1,25 @@

-## <%= vars.k8s_runtime_abbr %> Ports and Protocols +## TKGI Ports and Protocols <% if current_page.data.netenv == "nsxt" %> -The following tables list ports and protocols required for network communications between <%= vars.product_short %> v1.5.0 +The following tables list ports and protocols required for network communications between Tanzu Kubernetes Grid Integrated Edition v1.5.0 and later, and vSphere 6.7 and NSX-T or NSX 2.4.0.1 and later. <% end %> <% if current_page.data.netenv == "vsphere" %> -The following tables list ports and protocols required for network communications between <%= vars.product_short %> v1.5.0 +The following tables list ports and protocols required for network communications between Tanzu Kubernetes Grid Integrated Edition v1.5.0 and later, and vSphere 6.7 and later. <% end %> <% if current_page.data.netenv == "vsphere" || current_page.data.netenv == "nsxt" %> <% else %> -The following tables list ports and protocols required for network communications between <%= vars.product_short %> v1.5.0 +The following tables list ports and protocols required for network communications between Tanzu Kubernetes Grid Integrated Edition v1.5.0 and later, and other components. <% end %>
-### <%= vars.k8s_runtime_abbr %> Users Ports and Protocols +### TKGI Users Ports and Protocols -The following table lists ports and protocols used for network communication between <%= vars.k8s_runtime_abbr %> user interface components. +The following table lists ports and protocols used for network communication between TKGI user interface components. | Source Component | Destination Component | Destination Protocol | Destination Port | Service | | --- | --- | --- | --- | --- | @@ -33,7 +33,7 @@ The following table lists ports and protocols used for network communication bet <% end %> | Admin/Operator Console | Ops Manager | TCP | 22 | SSH | | Admin/Operator Console | Ops Manager | TCP | 443 | HTTPS | -| Admin/Operator Console | <%= vars.k8s_runtime_abbr %> Controller | TCP | 9021 | TKGI API Server | +| Admin/Operator Console | TKGI Controller | TCP | 9021 | TKGI API Server | <% if current_page.data.netenv == "nsxt" || current_page.data.netenv == "vsphere" %> | Admin/Operator Console | vCenter Server | TCP | 443 | HTTPS | | Admin/Operator Console | vCenter Server | TCP | 5480 | vami | @@ -48,22 +48,22 @@ The following table lists ports and protocols used for network communication bet | Admin/Operator and Developer Consoles | Kubernetes Cluster Ingress Controller | TCP | 80 | HTTP | | Admin/Operator and Developer Consoles | Kubernetes Cluster Ingress Controller | TCP | 443 | HTTPS | | Admin/Operator and Developer Consoles | Kubernetes Cluster Worker Node | TCP/UDP | 30000-32767 | Kubernetes NodePort | -| Admin/Operator and Developer Consoles | <%= vars.k8s_runtime_abbr %> Controller | TCP | 8443 | HTTPSCA | +| Admin/Operator and Developer Consoles | TKGI Controller | TCP | 8443 | HTTPSCA | | All User Consoles (Operator, Developer, Consumer) | Kubernetes App Load-Balancer Svc | TCP/UDP | Varies | varies with apps | | All User Consoles (Operator, Developer, Consumer) | Kubernetes Cluster Ingress Controller | TCP | 80 | HTTP | | All User Consoles (Operator, Developer, Consumer) | Kubernetes Cluster Ingress Controller | TCP | 443 | HTTPS | | All User Consoles (Operator, Developer, Consumer) | Kubernetes Cluster Worker Node | TCP/UDP | 30000-32767 | Kubernetes NodePort | <% if current_page.data.netenv == "nsxt" %> -

Note: The type:NodePort Service type is not supported for <%= vars.k8s_runtime_abbr %> deployments on vSphere with NSX. +

Note: The type:NodePort Service type is not supported for TKGI deployments on vSphere with NSX. Only type:LoadBalancer and Services associated with Ingress rules are supported on vSphere with NSX.
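As a quick, illustrative way to confirm that a few of the control plane ports listed in these tables are reachable, you can run checks like the following; the hostnames are placeholders for your own FQDNs or IPs:

```
# Reachability sketch only; substitute your own hostnames.
nc -vz OPS-MANAGER-FQDN 443    # Ops Manager, HTTPS
nc -vz TKGI-API-FQDN 9021      # TKGI API server
nc -vz TKGI-API-FQDN 8443      # HTTPSCA (UAA)
```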

<% else %> <% end %>
-### <%= vars.k8s_runtime_abbr %> Core Ports and Protocols +### TKGI Core Ports and Protocols -The following table lists ports and protocols used for network communication between core <%= vars.k8s_runtime_abbr %> components. +The following table lists ports and protocols used for network communication between core TKGI components. | Source Component | Destination Component | Destination Protocol | Destination Port | Service| | --- | --- | --- | --- | --- | @@ -88,8 +88,8 @@ The following table lists ports and protocols used for network communication bet | Ops Manager | NSX Manager/Controller Node | TCP | 443 | HTTPS| <% else %> <% end %> -| Ops Manager | <%= vars.k8s_runtime_abbr %> Controller | TCP | 22 | SSH| -| Ops Manager | <%= vars.k8s_runtime_abbr %> Controller | TCP | 8443 | HTTPSCA| +| Ops Manager | TKGI Controller | TCP | 22 | SSH| +| Ops Manager | TKGI Controller | TCP | 8443 | HTTPSCA| <% if current_page.data.netenv == "nsxt" || current_page.data.netenv == "vsphere" %> | Ops Manager | vCenter Server | TCP | 443 | HTTPS| | Ops Manager | vSphere ESXI Hosts Mgmt. vmknic | TCP | 443 | HTTPS| @@ -109,19 +109,19 @@ The following table lists ports and protocols used for network communication bet | BOSH Compilation Job VM | BOSH Director | TCP | 25923 | health monitor daemon| | BOSH Compilation Job VM | Harbor Private Image Registry | TCP | 443 | HTTPS| | BOSH Compilation Job VM | Harbor Private Image Registry | TCP | 8853 | BOSH DNS health| -| <%= vars.k8s_runtime_abbr %> Controller | BOSH Director | TCP | 4222 | BOSH nats server| -| <%= vars.k8s_runtime_abbr %> Controller | BOSH Director | TCP | 8443 | HTTPSCA| -| <%= vars.k8s_runtime_abbr %> Controller | BOSH Director | TCP | 25250 | BOSH BlobStore| -| <%= vars.k8s_runtime_abbr %> Controller | BOSH Director | TCP | 25555 | BOSH director rest api| -| <%= vars.k8s_runtime_abbr %> Controller | BOSH Director | TCP | 25923 | health monitor daemon| -| <%= vars.k8s_runtime_abbr %> Controller | Kubernetes Cluster Control Plane/etcd Node | TCP | 8443 | HTTPSCA| -| <%= vars.k8s_runtime_abbr %> Controller | <%= vars.control_plane_db %> VM | TCP | 3306 | tkgi db proxy | +| TKGI Controller | BOSH Director | TCP | 4222 | BOSH nats server| +| TKGI Controller | BOSH Director | TCP | 8443 | HTTPSCA| +| TKGI Controller | BOSH Director | TCP | 25250 | BOSH BlobStore| +| TKGI Controller | BOSH Director | TCP | 25555 | BOSH director rest api| +| TKGI Controller | BOSH Director | TCP | 25923 | health monitor daemon| +| TKGI Controller | Kubernetes Cluster Control Plane/etcd Node | TCP | 8443 | HTTPSCA| +| TKGI Controller | TKGI Database VM | TCP | 3306 | tkgi db proxy | <% if current_page.data.netenv == "nsxt" %> -| <%= vars.k8s_runtime_abbr %> Controller | NSX API VIP | TCP | 443 | HTTPS| +| TKGI Controller | NSX API VIP | TCP | 443 | HTTPS| <% else %> <% end %> <% if current_page.data.netenv == "nsxt" || current_page.data.netenv == "vsphere" %> -| <%= vars.k8s_runtime_abbr %> Controller | vCenter Server | TCP | 443 | HTTPS| +| TKGI Controller | vCenter Server | TCP | 443 | HTTPS| <% else %> <% end %> | Harbor Private Image Registry | BOSH Director | TCP | 4222 | BOSH nats server| @@ -130,7 +130,7 @@ The following table lists ports and protocols used for network communication bet | Harbor Private Image Registry | IP NAS Storage Array | TCP | 111 | NFS RPC portmapper| | Harbor Private Image Registry | IP NAS Storage Array | TCP | 2049 | NFS | | Harbor Private Image Registry | Public CVE Source Database | TCP | 443 | HTTPS| -| kube-system 
pod/telemetry-agent | <%= vars.k8s_runtime_abbr %> Controller | TCP | 24224 | Fluentd out_forward| +| kube-system pod/telemetry-agent | TKGI Controller | TCP | 24224 | Fluentd out_forward| <% if current_page.data.netenv == "nsxt" %> | Kubernetes Cluster Ingress Controller | NSX API VIP | TCP | 443 | HTTPS| <% else %> @@ -149,8 +149,8 @@ The following table lists ports and protocols used for network communication bet | Kubernetes Cluster Control Plane/etcd Node | NSX API VIP | TCP | 443 | HTTPS| <% else %> <% end %> -| Kubernetes Cluster Control Plane/etcd Node | <%= vars.k8s_runtime_abbr %> Controller | TCP | 8443 | HTTPSCA| -| Kubernetes Cluster Control Plane/etcd Node | <%= vars.k8s_runtime_abbr %> Controller | TCP | 8853 | BOSH DNS health| +| Kubernetes Cluster Control Plane/etcd Node | TKGI Controller | TCP | 8443 | HTTPSCA| +| Kubernetes Cluster Control Plane/etcd Node | TKGI Controller | TCP | 8853 | BOSH DNS health| <% if current_page.data.netenv == "nsxt" || current_page.data.netenv == "vsphere" %> | Kubernetes Cluster Control Plane/etcd Node | vCenter Server | TCP | 443 | HTTPS| <% else %> @@ -165,8 +165,8 @@ The following table lists ports and protocols used for network communication bet | Kubernetes Cluster Worker Node | Kubernetes Cluster Control Plane/etcd Node | TCP | 8443 | HTTPSCA| | Kubernetes Cluster Worker Node | Kubernetes Cluster Control Plane/etcd Node | TCP | 8853 | BOSH DNS health| | Kubernetes Cluster Worker Node | Kubernetes Cluster Control Plane/etcd Node | TCP | 10250 | kubelet API | -| pks-system pod/cert-generator | <%= vars.k8s_runtime_abbr %> Controller | TCP | 24224 | Fluentd out_forward| -| pks-system pod/fluent-bit | <%= vars.k8s_runtime_abbr %> Controller | TCP | 24224 | Fluentd out_forward| +| pks-system pod/cert-generator | TKGI Controller | TCP | 24224 | Fluentd out_forward| +| pks-system pod/fluent-bit | TKGI Controller | TCP | 24224 | Fluentd out_forward| <% if current_page.data.netenv == "nsxt" || current_page.data.netenv == "vsphere" %> <% else %> diff --git a/_preparing-for-bbr.html.md.erb b/_preparing-for-bbr.html.md.erb index 1f4de74be..42cd4704b 100644 --- a/_preparing-for-bbr.html.md.erb +++ b/_preparing-for-bbr.html.md.erb @@ -1,4 +1,4 @@ -Before you use BBR to either back up <%= vars.k8s_runtime_abbr %> or restore <%= vars.k8s_runtime_abbr %> from backup, +Before you use BBR to either back up TKGI or restore TKGI from backup, follow these steps to retrieve deployment information and credentials: * [Verify your BBR Version](#verify-bbr-version) @@ -13,7 +13,7 @@ follow these steps to retrieve deployment information and credentials: ### Verify Your BBR Version Before running BBR, verify that the installed version of BBR is compatible with the version of Ops Manager -your <%= vars.k8s_runtime_abbr %> tile is on: +your TKGI tile is on: 1. To determine the Ops Manager BBR version requirements, see the [Ops Manager Release Notes](https://docs.vmware.com/en/VMware-Tanzu-Operations-Manager/3.0/vmware-tanzu-ops-manager/release-notes.html) @@ -129,7 +129,7 @@ To retrieve your BOSH Director credentials using the Ops Manager API, perform th To obtain BOSH credentials for your BBR operations, perform the following steps: -1. From the Ops Manager Installation Dashboard, click the **<%= vars.product_tile %>** tile. +1. From the Ops Manager Installation Dashboard, click the **Tanzu Kubernetes Grid Integrated Edition** tile. 1. Select the **Credentials** tab. 1. Navigate to **Credentials > UAA Client Credentials**. 1. 
Record the value for `uaa_client_secret`. @@ -177,7 +177,7 @@ To obtain your BOSH Director's IP address: ``` ### Download the Root CA Certificate -To download the root CA certificate for your <%= vars.product_short %> deployment, +To download the root CA certificate for your Tanzu Kubernetes Grid Integrated Edition deployment, perform the following steps: 1. Open the Ops Manager Installation Dashboard. diff --git a/_prerequisites.html.md.erb b/_prerequisites.html.md.erb index 54bececa5..042fc7816 100644 --- a/_prerequisites.html.md.erb +++ b/_prerequisites.html.md.erb @@ -1,4 +1,4 @@ -If you use an instance of <%= vars.ops_manager %> that you configured previously to install other runtimes, perform the following steps before you install <%= vars.product_short %>: +If you use an instance of Ops Manager that you configured previously to install other runtimes, perform the following steps before you install Tanzu Kubernetes Grid Integrated Edition: 1. Navigate to Ops Manager. 1. Open the **Director Config** pane. diff --git a/_proxy-ops-man.html.md.erb b/_proxy-ops-man.html.md.erb index ecb43c801..8bdb5ec9f 100644 --- a/_proxy-ops-man.html.md.erb +++ b/_proxy-ops-man.html.md.erb @@ -13,8 +13,8 @@ To enable an HTTP proxy for outgoing HTTP/HTTPS traffic from Ops Manager and the 1. Under **No Proxy**, include the hosts that must bypass the proxy. This is required.

- In addition to `127.0.0.1` and `localhost`, include the BOSH Director IP, Ops Manager IP, <%= vars.control_plane %> VM IP, and the <%= vars.control_plane_db %> VM IP. - If the <%= vars.control_plane_db %> is in HA mode (beta), enter all your database IPs in the **No Proxy** field. + In addition to `127.0.0.1` and `localhost`, include the BOSH Director IP, Ops Manager IP, TKGI API VM IP, and the TKGI Database VM IP. + If the TKGI Database is in HA mode (beta), enter all your database IPs in the **No Proxy** field. ``` 127.0.0.1,localhost,BOSH-DIRECTOR-IP,TKGI-API-IP,OPS-MANAGER-IP,TKGI-DATABASE-IP diff --git a/_resource-config.html.md.erb b/_resource-config.html.md.erb index 27ba45379..33ca312fc 100644 --- a/_resource-config.html.md.erb +++ b/_resource-config.html.md.erb @@ -1,16 +1,16 @@ For each job, review the **Automatic** values in the following fields: - * **INSTANCES**: <%= vars.product_short %> defaults to the minimum configuration. + * **INSTANCES**: Tanzu Kubernetes Grid Integrated Edition defaults to the minimum configuration. If you want a highly available configuration (beta), scale the number of VM instances as follows: - 1. To configure your <%= vars.product_short %> database for high availability (beta), - increase the **INSTANCES** value for **<%= vars.control_plane_db %>** to `3`. - 1. To configure your <%= vars.product_short %> API and UAA for high availability (beta), - increase the **INSTANCES** value for **<%= vars.control_plane %>** to `2` or more. + 1. To configure your Tanzu Kubernetes Grid Integrated Edition database for high availability (beta), + increase the **INSTANCES** value for **TKGI Database** to `3`. + 1. To configure your Tanzu Kubernetes Grid Integrated Edition API and UAA for high availability (beta), + increase the **INSTANCES** value for **TKGI API** to `2` or more.

Warning: High availability mode is a beta feature. Do not scale your TKGI API or TKGI Database to more than one instance in production environments.

<% if current_page.data.iaas == "Azure" %>

Note: On Azure, you must reconfigure your - <%= vars.control_plane %> load balancer backend pool - whenever you modify your <%= vars.control_plane %> VM group. - For more information about configuring your <%= vars.control_plane %> + TKGI API load balancer backend pool + whenever you modify your TKGI API VM group. + For more information about configuring your TKGI API load balancer backend pool, see Create a Load Balancer in Configuring an Azure Load Balancer for the TKGI API. @@ -23,11 +23,11 @@ For each job, review the **Automatic** values in the following fields: Provisioning an NSX Load Balancer for the TKGI API Server.

<% end %> - * **VM TYPE**: By default, the **<%= vars.control_plane_db %>** and **<%= vars.control_plane %>** jobs are set to the same **Automatic** VM type. + * **VM TYPE**: By default, the **TKGI Database** and **TKGI API** jobs are set to the same **Automatic** VM type. If you want to adjust this value, we recommend that you select the same VM type for both jobs. -

Note: The Automatic VM TYPE values match the recommended resource configuration for the <%= vars.control_plane %> - and <%= vars.control_plane_db %> jobs. +

Note: The Automatic VM TYPE values match the recommended resource configuration for the TKGI API + and TKGI Database jobs.

- * **PERSISTENT DISK TYPE**: By default, the **<%= vars.control_plane_db %>** and **<%= vars.control_plane %>** jobs are set to the same persistent disk type. + * **PERSISTENT DISK TYPE**: By default, the **TKGI Database** and **TKGI API** jobs are set to the same persistent disk type. If you want to adjust this value, you can change the persistent disk type for each of the jobs independently. Using the same persistent disk type for both jobs is not required. diff --git a/_saml-sso-login.html.md.erb b/_saml-sso-login.html.md.erb index 8176c5748..77d491b7b 100644 --- a/_saml-sso-login.html.md.erb +++ b/_saml-sso-login.html.md.erb @@ -1,4 +1,4 @@ -

Note: If your operator has configured <%= vars.product_short %> to use a SAML identity provider, - you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in <%= vars.k8s_runtime_abbr %> CLI. For information about configuring SAML, - see Connecting <%= vars.product_short %> to a SAML Identity Provider +

Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, + you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, + see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider
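For example, a sketch of a SAML SSO login, assuming the SSO flag referenced above is `--sso` and using placeholder values for the API FQDN and certificate path:

```
# TKGI-API-FQDN and CERT-PATH are placeholders.
tkgi login -a TKGI-API-FQDN --ca-cert CERT-PATH --sso
```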

diff --git a/_scale-to-ha-upgrade.html.md.erb b/_scale-to-ha-upgrade.html.md.erb index 7008c5308..e571ab70b 100644 --- a/_scale-to-ha-upgrade.html.md.erb +++ b/_scale-to-ha-upgrade.html.md.erb @@ -2,22 +2,22 @@ -1. In the **<%= vars.product_tile %>** tile, click **Resource Config**. +1. In the **Tanzu Kubernetes Grid Integrated Edition** tile, click **Resource Config**. -1. To configure your <%= vars.product_short %> database for high availability (HA), -increase the **INSTANCES** value for **<%= vars.control_plane_db %>** to `3`. -1. To configure your <%= vars.product_short %> API and UAA for HA, -increase the **INSTANCES** value for **<%= vars.control_plane %>** to `2` or more. +1. To configure your Tanzu Kubernetes Grid Integrated Edition database for high availability (HA), +increase the **INSTANCES** value for **TKGI Database** to `3`. +1. To configure your Tanzu Kubernetes Grid Integrated Edition API and UAA for HA, +increase the **INSTANCES** value for **TKGI API** to `2` or more.

Note: On Azure, you must reconfigure your - <%= vars.control_plane %> load balancer backend pool - whenever you modify your <%= vars.control_plane %> VM group. - For more information about configuring your <%= vars.control_plane %> + TKGI API load balancer backend pool + whenever you modify your TKGI API VM group. + For more information about configuring your TKGI API load balancer backend pool, see Create a Load Balancer in Configuring an Azure Load Balancer for the TKGI API. diff --git a/_share-endpoint.html.md.erb b/_share-endpoint.html.md.erb index 0c4ac1b1c..a73825494 100644 --- a/_share-endpoint.html.md.erb +++ b/_share-endpoint.html.md.erb @@ -1,7 +1,7 @@ -You need to retrieve the <%= vars.control_plane %> endpoint to allow your organization to use the API to create, update, and delete Kubernetes clusters. +You need to retrieve the TKGI API endpoint to allow your organization to use the API to create, update, and delete Kubernetes clusters. -To retrieve the <%= vars.control_plane %> endpoint, do the following: +To retrieve the TKGI API endpoint, do the following: 1. Navigate to the Ops Manager **Installation Dashboard**. -1. Click the **<%= vars.product_tile %>** tile. -1. Click the **Status** tab and locate the **<%= vars.control_plane %>** job. The IP address of the <%= vars.control_plane %> job is the <%= vars.control_plane %> endpoint. +1. Click the **Tanzu Kubernetes Grid Integrated Edition** tile. +1. Click the **Status** tab and locate the **TKGI API** job. The IP address of the TKGI API job is the TKGI API endpoint. diff --git a/_tmc.html.md.erb b/_tmc.html.md.erb index b931a8479..7eca0652c 100644 --- a/_tmc.html.md.erb +++ b/_tmc.html.md.erb @@ -1,17 +1,17 @@ <% if current_page.data.iaas != "GCP" %> Tanzu Mission Control integration lets you monitor and manage -<%= vars.product_tile %> clusters from the Tanzu Mission Control console, +Tanzu Kubernetes Grid Integrated Edition clusters from the Tanzu Mission Control console, which makes the Tanzu Mission Control console a single point of control for all Kubernetes clusters. For more information about Tanzu Mission Control, see the VMware Tanzu Mission Control home page. -To integrate <%= vars.product_short %> with Tanzu Mission Control: +To integrate Tanzu Kubernetes Grid Integrated Edition with Tanzu Mission Control: -1. Confirm that the <%= vars.control_plane %> VM has internet access and +1. Confirm that the TKGI API VM has internet access and can connect to `cna.tmc.cloud.vmware.com` and the other outbound URLs listed in the [What Happens When You Attach a Cluster](https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzumc-concepts/GUID-147472ED-16BB-4AAA-9C35-A951C5ADA88A.html) section of the Tanzu Mission Control Product documentation. -1. Navigate to the **<%= vars.product_tile %>** tile > the **Tanzu Mission Control** pane and +1. Navigate to the **Tanzu Kubernetes Grid Integrated Edition** tile > the **Tanzu Mission Control** pane and select **Yes** under **Tanzu Mission Control Integration**. Tanzu Mission Control Integration @@ -37,15 +37,15 @@ select **Yes** under **Tanzu Mission Control Integration**. For more information about role and access policy, see [Access Control](https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzumc-concepts/GUID-EB9C6D83-1132-444F-8218-F264E43F25BD.html) in the VMware Tanzu Mission Control Product documentation.

- - **Tanzu Mission Control Cluster Name Prefix**: Enter a name prefix for identifying the <%= vars.product_short %> clusters in Tanzu Mission Control. + - **Tanzu Mission Control Cluster Name Prefix**: Enter a name prefix for identifying the Tanzu Kubernetes Grid Integrated Edition clusters in Tanzu Mission Control. 1. Click **Save**. -

Warning: After the <%= vars.product_tile %> tile is deployed with a configured cluster group, the cluster group cannot be updated.

+

Warning: After the Tanzu Kubernetes Grid Integrated Edition tile is deployed with a configured cluster group, the cluster group cannot be updated.

Note: When you upgrade your Kubernetes clusters and have Tanzu Mission Control integration enabled, existing clusters will be attached to Tanzu Mission Control.
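As a rough check of the outbound connectivity described earlier in this procedure, you can run something like the following from the TKGI API VM (sketch only):

```
# Confirm the Tanzu Mission Control endpoint named in this procedure is reachable.
curl -sv https://cna.tmc.cloud.vmware.com >/dev/null
```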

<% else %> -<%= vars.product_short %> does not support Tanzu Mission Control integration on GCP. +Tanzu Kubernetes Grid Integrated Edition does not support Tanzu Mission Control integration on GCP. Skip this configuration pane. <% end %> diff --git a/_uaa-admin-login.html.md.erb b/_uaa-admin-login.html.md.erb index 8913f8093..82658a5c7 100644 --- a/_uaa-admin-login.html.md.erb +++ b/_uaa-admin-login.html.md.erb @@ -1,8 +1,8 @@ -Before creating <%= vars.k8s_runtime_abbr %> users, you must log in to the UAA server as a UAA admin. To log in to the UAA server, do the following: +Before creating TKGI users, you must log in to the UAA server as a UAA admin. To log in to the UAA server, do the following: 1. Retrieve the UAA management admin client secret: - 1. In a web browser, navigate to the Ops Manager **Installation Dashboard** and click the **<%= vars.product_tile %>** tile. + 1. In a web browser, navigate to the Ops Manager **Installation Dashboard** and click the **Tanzu Kubernetes Grid Integrated Edition** tile. 1. Click the **Credentials** tab. @@ -16,8 +16,8 @@ Before creating <%= vars.k8s_runtime_abbr %> users, you must log in to the UAA s Where: - * `TKGI-API` is the domain name of your <%= vars.control_plane %> server. You entered this domain name in the **<%= vars.product_tile %>** tile > **<%= vars.control_plane %>** > **API Hostname (FQDN)**. - * `CERTIFICATE-PATH` is the path to your Ops Manager root CA certificate. Provide this certificate to validate the <%= vars.control_plane %> certificate with SSL. + * `TKGI-API` is the domain name of your TKGI API server. You entered this domain name in the **Tanzu Kubernetes Grid Integrated Edition** tile > **TKGI API** > **API Hostname (FQDN)**. + * `CERTIFICATE-PATH` is the path to your Ops Manager root CA certificate. Provide this certificate to validate the TKGI API certificate with SSL. * If you are logged in to the Ops Manager VM, specify `/var/tempest/workspaces/default/root_ca_certificate` as the path. This is the default location of the root certificate on the Ops Manager VM. * If you downloaded the Ops Manager root CA certificate to your machine, specify the path where you stored the certificate. diff --git a/_uaa-scopes.html.md.erb b/_uaa-scopes.html.md.erb index b61f58fdb..e7951ec24 100644 --- a/_uaa-scopes.html.md.erb +++ b/_uaa-scopes.html.md.erb @@ -1,6 +1,6 @@ -By assigning UAA scopes, you grant users the ability to create, manage, and audit Kubernetes clusters in <%= vars.product_short %>. +By assigning UAA scopes, you grant users the ability to create, manage, and audit Kubernetes clusters in Tanzu Kubernetes Grid Integrated Edition. -A UAA admin user can assign the following UAA scopes to <%= vars.product_short %> users: +A UAA admin user can assign the following UAA scopes to Tanzu Kubernetes Grid Integrated Edition users: * `pks.clusters.admin`: Accounts with this scope can create and access all clusters. * `pks.clusters.manage`: Accounts with this scope can create and access their own clusters. diff --git a/_uaa.html.md.erb b/_uaa.html.md.erb index 20f6ebb2d..d3c56dcf9 100644 --- a/_uaa.html.md.erb +++ b/_uaa.html.md.erb @@ -1,16 +1,16 @@ To configure the UAA server: 1. Click **UAA**. -1. Under **<%= vars.control_plane %> Access Token Lifetime**, enter a time in seconds for the -<%= vars.control_plane %> access token lifetime. This field defaults to `600`. +1. Under **TKGI API Access Token Lifetime**, enter a time in seconds for the +TKGI API access token lifetime. This field defaults to `600`. 
UAA pane configuration -1. Under **<%= vars.control_plane %> Refresh Token Lifetime**, enter a time in seconds for the -<%= vars.control_plane %> refresh token lifetime. This field defaults to `21600`. -1. Under **<%= vars.k8s_runtime_abbr %> Cluster Access Token Lifetime**, enter a time in seconds for the +1. Under **TKGI API Refresh Token Lifetime**, enter a time in seconds for the +TKGI API refresh token lifetime. This field defaults to `21600`. +1. Under **TKGI Cluster Access Token Lifetime**, enter a time in seconds for the cluster access token lifetime. This field defaults to `600`. -1. Under **<%= vars.k8s_runtime_abbr %> Cluster Refresh Token Lifetime**, enter a time in seconds for the +1. Under **TKGI Cluster Refresh Token Lifetime**, enter a time in seconds for the cluster refresh token lifetime. This field defaults to `21600`.

Note: <%= vars.recommended_by %> recommends using the default UAA token timeout values. @@ -19,10 +19,10 @@ after six hours.

1. Under **Configure created clusters to use UAA as the OIDC provider**, select **Enabled** or **Disabled**. This is a global default setting for -<%= vars.k8s_runtime_abbr %>-provisioned clusters. For more information, see +TKGI-provisioned clusters. For more information, see [OIDC Provider for Kubernetes Clusters](oidc-provider.html).

- To configure <%= vars.product_short %> to use UAA as the OIDC provider: + To configure Tanzu Kubernetes Grid Integrated Edition to use UAA as the OIDC provider: 1. Under **Configure created clusters to use UAA as the OIDC provider**, select **Enabled**. ![OIDC configuration check box](images/oidc.png) @@ -44,17 +44,17 @@ select **Enabled** or **Disabled**. This is a global default setting for

Warning: <%= vars.recommended_by %> recommends adding OIDC prefixes to prevent users and groups from gaining unintended cluster privileges. If you change the above values for a - pre-existing <%=vars.product_short %> installation, you must change any + pre-existing Tanzu Kubernetes Grid Integrated Edition installation, you must change any existing role bindings that bind to a user name or group. If you do not change your role bindings, developers cannot access Kubernetes clusters. For instructions, see Managing Cluster Access and Permissions.

1. (Optional) For **TKGI cluster client redirect URIs**, enter one or more comma-delimited UAA redirect URIs. Configure **TKGI cluster client redirect URIs** to assign persistent UAA `cluster_client` `redirect_uri` URIs to your clusters. -UAA redirect URIs configured in the **TKGI cluster client redirect URIs** field persist through cluster updates and <%= vars.k8s_runtime_abbr %> upgrades. +UAA redirect URIs configured in the **TKGI cluster client redirect URIs** field persist through cluster updates and TKGI upgrades. 1. Select one of the following options: * To use an internal user account store for UAA, select **Internal UAA**. Click **Save** and continue to [(Optional) Host Monitoring](#syslog). * To use LDAP for UAA, select **LDAP Server** and continue to - [Connecting <%= vars.product_short %> to an LDAP Server](configuring-ldap.html). + [Connecting Tanzu Kubernetes Grid Integrated Edition to an LDAP Server](configuring-ldap.html). * To use SAML for UAA, select **SAML Identity Provider** and continue to - [Connecting <%= vars.product_short %> to a SAML Identity Provider](configuring-saml.html). + [Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider](configuring-saml.html). diff --git a/_usage-data.html.md.erb b/_usage-data.html.md.erb index 8ac0aa650..b01df877f 100644 --- a/_usage-data.html.md.erb +++ b/_usage-data.html.md.erb @@ -1,8 +1,8 @@ -<%= vars.product_short %>-provisioned clusters send usage data to the <%= vars.k8s_runtime_abbr %> control plane for storage. +Tanzu Kubernetes Grid Integrated Edition-provisioned clusters send usage data to the TKGI control plane for storage. The VMware Customer Experience Improvement Program (CEIP) provides the option to also send the cluster usage data to VMware to improve customer experience. -To configure <%= vars.product_short %> CEIP Program settings: +To configure Tanzu Kubernetes Grid Integrated Edition CEIP Program settings: 1. Click **CEIP**. 1. Review the information about the CEIP. @@ -18,7 +18,7 @@ To configure <%= vars.product_short %> CEIP Program settings: * (Optional) Enter your entitlement account number or Tanzu customer number. If you are a VMware customer, you can find your entitlement account number in your **Account Summary** on [my.vmware.com](https://my.vmware.com). If you are a Pivotal customer, you can find your Pivotal Customer Number in your Pivotal Order Confirmation email. - * (Optional) Enter a descriptive name for your <%= vars.k8s_runtime_abbr %> installation. + * (Optional) Enter a descriptive name for your TKGI installation. The label you assign to this installation will be used in CEIP reports to identify the environment. 1. To provide information about the purpose for this installation, select an option. ![CEIP installation type](./images/ceip-telemetry-type.png) diff --git a/_vrealize-logs.html.md.erb b/_vrealize-logs.html.md.erb index f16cb1738..95db6ca4e 100644 --- a/_vrealize-logs.html.md.erb +++ b/_vrealize-logs.html.md.erb @@ -21,4 +21,4 @@ The default value `0` means that the rate is not limited, which suffices for man A large number might result in dropping too many log entries.

1. Click **Save**. These settings apply to any clusters created after you have saved these configuration settings and clicked **Apply Changes**. If the **Upgrade all clusters errand** has been enabled, these settings are also applied to existing clusters. -

Note: The <%= vars.product_tile %> tile does not validate your vRLI configuration settings. To verify your setup, look for log entries in vRLI.

+

Note: The Tanzu Kubernetes Grid Integrated Edition tile does not validate your vRLI configuration settings. To verify your setup, look for log entries in vRLI.

diff --git a/_vsphere_versions.html.md.erb b/_vsphere_versions.html.md.erb index 20ddf8032..4aa8906fd 100644 --- a/_vsphere_versions.html.md.erb +++ b/_vsphere_versions.html.md.erb @@ -1,2 +1,2 @@ -For <%= vars.product_short %> on vSphere version requirements, refer to the VMware Product Interoperability Matrices.

+For the vSphere version requirements for Tanzu Kubernetes Grid Integrated Edition, refer to the VMware Product Interoperability Matrices.

diff --git a/about-lb.html.md.erb b/about-lb.html.md.erb index 3c87fa9cf..68d2bd8bd 100644 --- a/about-lb.html.md.erb +++ b/about-lb.html.md.erb @@ -3,31 +3,31 @@ title: Load Balancers in Tanzu Kubernetes Grid Integrated Edition owner: TKGI --- -This topic describes the <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) load balancers for the <%= vars.control_plane %> and <%= vars.k8s_runtime_abbr %> clusters and workloads. +This topic describes the VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) load balancers for the TKGI API and TKGI clusters and workloads.

## Overview -Load balancers used with <%= vars.k8s_runtime_abbr %> differ by the type of deployment: +Load balancers used with TKGI differ by the type of deployment: -* [Load Balancers in <%= vars.product_short %> Deployments without NSX](#without-nsx-t) -* [Load Balancers in <%= vars.product_short %> Deployments on vSphere with NSX](#with-nsx-t) +* [Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments without NSX](#without-nsx-t) +* [Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments on vSphere with NSX](#with-nsx-t)

-## Load Balancers in <%= vars.product_short %> Deployments without NSX +## Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments without NSX -For <%= vars.product_short %> deployments on GCP, AWS, or vSphere without NSX, you can configure load balancers for the following: +For Tanzu Kubernetes Grid Integrated Edition deployments on GCP, AWS, or vSphere without NSX, you can configure load balancers for the following: -* **[<%= vars.control_plane %> Load Balancer](#tkgi-api)**: Configuring this load balancer enables you to run <%= vars.k8s_runtime_abbr %> Command Line Interface (<%= vars.k8s_runtime_abbr %> CLI) commands from your local workstation. +* **[TKGI API Load Balancer](#tkgi-api)**: Configuring this load balancer enables you to run TKGI Command Line Interface (TKGI CLI) commands from your local workstation. * **[Kubernetes Cluster Load Balancers](#cluster)**: Configuring a load balancer for each new cluster enables you to run Kubernetes CLI (kubectl) commands on the cluster. * **[Workload Load Balancers)](#workload)**: Configuring a load balancer for your application workloads enables external access to the services that run on your cluster. The following diagram, applicable to GCP, AWS, and vSphere without NSX, shows where each of the above load balancers can be used -within your <%= vars.product_short %> deployment. +within your Tanzu Kubernetes Grid Integrated Edition deployment. <%= image_tag("images/lb-diagram.png", :alt => "TKGI load balancer diagram including all load balancer options for TKGI deployments without NSX") %> <%#= Image source: https://docs.google.com/drawings/d/17Zzznn0J8j3sEICByPnAF1mSKg0pisqtrZNWi8KbOh8/edit %> @@ -35,17 +35,17 @@ If you use either vSphere without NSX or GCP, you are expected to create your ow If your cloud provider does not offer load balancing, you can use any external TCP or HTTPS load balancer of your choice.
-### <%= vars.control_plane %> Load Balancer +### TKGI API Load Balancer -The <%= vars.control_plane %> load balancer enables you to access the <%= vars.control_plane %> from outside the network on <%= vars.product_short %> deployments on GCP, AWS, and on vSphere without NSX. -For example, configuring a load balancer for the <%= vars.control_plane %> enables you to run <%= vars.k8s_runtime_abbr %> CLI commands from your local workstation. +The TKGI API load balancer enables you to access the TKGI API from outside the network on Tanzu Kubernetes Grid Integrated Edition deployments on GCP, AWS, and on vSphere without NSX. +For example, configuring a load balancer for the TKGI API enables you to run TKGI CLI commands from your local workstation. -For information about configuring the <%= vars.control_plane %> load balancer on vSphere without NSX, see [Configuring <%= vars.control_plane %> Load Balancer](./vsphere-configure-api.html). +For information about configuring the TKGI API load balancer on vSphere without NSX, see [Configuring TKGI API Load Balancer](./vsphere-configure-api.html).
### Kubernetes Cluster Load Balancers -When you create an <%= vars.product_short %> cluster on GCP, AWS, and on vSphere without NSX, +When you create an Tanzu Kubernetes Grid Integrated Edition cluster on GCP, AWS, and on vSphere without NSX, you must configure external access to the cluster by creating an external TCP or HTTPS load balancer. The load balancer enables the Kubernetes CLI to communicate with the cluster. @@ -57,14 +57,14 @@ To enable kubectl to access the cluster without a load balancer, you can do one For more information about configuring a cluster load balancer, see the following: -* [Creating and Configuring a GCP Load Balancer for <%= vars.product_short %> Clusters](gcp-cluster-load-balancer.html) -* [Creating and Configuring an AWS Load Balancer for <%= vars.product_short %> Clusters](aws-cluster-load-balancer.html) -* [Creating and Configuring an Azure Load Balancer for <%= vars.product_short %> Clusters](azure-cluster-load-balancer.html) +* [Creating and Configuring a GCP Load Balancer for Tanzu Kubernetes Grid Integrated Edition Clusters](gcp-cluster-load-balancer.html) +* [Creating and Configuring an AWS Load Balancer for Tanzu Kubernetes Grid Integrated Edition Clusters](aws-cluster-load-balancer.html) +* [Creating and Configuring an Azure Load Balancer for Tanzu Kubernetes Grid Integrated Edition Clusters](azure-cluster-load-balancer.html)
### Workload Load Balancers -To enable external access to your <%= vars.product_short %> app on GCP, AWS, and on vSphere without NSX, you can either create a load balancer or expose a static port on your workload. +To enable external access to your Tanzu Kubernetes Grid Integrated Edition app on GCP, AWS, and on vSphere without NSX, you can either create a load balancer or expose a static port on your workload. For information about configuring a load balancer for your app workload, see [Deploying and Exposing Basic Linux Workloads](deploy-workloads.html). @@ -77,15 +77,15 @@ See the [AWS Prerequisites](deploy-workloads.html#aws) section of _Deploying and A Kubernetes ingress controller sits behind a load balancer, routing HTTP and HTTPS requests from outside the cluster to services within the cluster. Kubernetes ingress resources can be configured to load balance traffic, provide externally reachable URLs to services, and manage other aspects of network traffic. -If you add an ingress controller to your <%= vars.product_short %> deployment, traffic routing is controlled by the ingress resource rules you define. -<%= vars.recommended_by %> recommends configuring <%= vars.product_short %> deployments with both a workload load balancer and an ingress controller. +If you add an ingress controller to your Tanzu Kubernetes Grid Integrated Edition deployment, traffic routing is controlled by the ingress resource rules you define. +<%= vars.recommended_by %> recommends configuring Tanzu Kubernetes Grid Integrated Edition deployments with both a workload load balancer and an ingress controller. The following diagram shows how the ingress routing can be used -within your <%= vars.product_short %> deployment. +within your Tanzu Kubernetes Grid Integrated Edition deployment. <%= image_tag("images/ingress-routing.png", :alt => "TKGI diagram that shows ingress routing for both Istio and NSX") %> <%#= Image source: https://docs.google.com/drawings/d/1IB2juuTQlwJ4QpRaMmjFvGAC3cS7irFy3OII-pb6GGE/edit %> -The load balancer on <%= vars.product_short %> on vSphere with NSX is automatically provisioned with Kubernetes ingress resources +The load balancer on Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX is automatically provisioned with Kubernetes ingress resources without the need to deploy and configure an additional ingress controller. For information about deploying a load balancer configured with ingress routing on GCP, AWS, Azure, and vSphere without NSX, see [Configuring Ingress Routing](configure-ingress.html). @@ -93,14 +93,14 @@ For information about ingress routing on vSphere with NSX, see [Configuring Ingr

-## Load Balancers in <%= vars.product_short %> Deployments on vSphere with NSX +## Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments on vSphere with NSX -<%= vars.product_short %> deployments on vSphere with NSX in high-availability mode require you configure a load balancer to access the <%= vars.control_plane %>. -To configure an NSX load balancer for <%= vars.control_plane %> traffic, see [Provisioning an NSX Load Balancer for the TKGI API Server](nsxt-lb-tkgi-api.html). +Tanzu Kubernetes Grid Integrated Edition deployments on vSphere with NSX in high-availability mode require you configure a load balancer to access the TKGI API. +To configure an NSX load balancer for TKGI API traffic, see [Provisioning an NSX Load Balancer for the TKGI API Server](nsxt-lb-tkgi-api.html). -<%= vars.k8s_runtime_abbr %> deployments on vSphere with NSX in singleton mode require you configure only a DNAT rule so that the <%= vars.control_plane %> host is accessible. -These <%= vars.k8s_runtime_abbr %> deployments do not require you to configure a load balancer to access the <%= vars.control_plane %>. -For more information, see [Share the <%= vars.product_short %> Endpoint](installing-nsx-t.html#retrieve-endpoint) in _Installing <%= vars.product_short %> on vSphere with NSX Integration_. +TKGI deployments on vSphere with NSX in singleton mode require you configure only a DNAT rule so that the TKGI API host is accessible. +These TKGI deployments do not require you to configure a load balancer to access the TKGI API. +For more information, see [Share the Tanzu Kubernetes Grid Integrated Edition Endpoint](installing-nsx-t.html#retrieve-endpoint) in _Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX Integration_. At runtime, NSX automatically handles load balancer creation, configuration, and deletion as part of the Kubernetes cluster create, update, and delete process. When a new Kubernetes cluster is created, NSX creates and configures a dedicated load balancer tied to it. The load balancer is a shared resource designed to provide efficient traffic distribution to control plane nodes as well as services deployed on worker nodes. @@ -115,12 +115,12 @@ Virtual server instances are created on the load balancer to provide access to t Load balancers are deployed in high-availability mode so that they are resilient to potential failures and able to recover quickly from critical conditions. -

Note: The NodePort Service type is not supported for <%= vars.product_short %> deployments on vSphere with NSX. Only type:LoadBalancerServices and Services associated with Ingress rules are supported on vSphere with NSX.

+

Note: The NodePort Service type is not supported for Tanzu Kubernetes Grid Integrated Edition deployments on vSphere with NSX. Only type:LoadBalancer Services and Services associated with Ingress rules are supported on vSphere with NSX.
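For example, a workload can be exposed through a type:LoadBalancer Service rather than a NodePort. This is a minimal sketch; the deployment name and ports are hypothetical:

```
# Expose an existing deployment through the NSX-provisioned load balancer.
kubectl expose deployment demo-app --port=80 --target-port=8080 --type=LoadBalancer
kubectl get service demo-app   # the EXTERNAL-IP column shows the load balancer address
```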


### Resizing Load Balancers -When a new Kubernetes cluster is provisioned using the <%= vars.control_plane %>, NSX creates a dedicated load balancer for that new cluster. By default, the size of the load balancer is set to Small. +When a new Kubernetes cluster is provisioned using the TKGI API, NSX creates a dedicated load balancer for that new cluster. By default, the size of the load balancer is set to Small. With network profiles, you can change the size of the load balancer deployed by NSX at the time of cluster creation. For information about network profiles, see [Using Network Profiles (NSX Only)](network-profiles.html). diff --git a/admission-plugins-disable.html.md.erb b/admission-plugins-disable.html.md.erb index 22bd3d08a..a025de656 100644 --- a/admission-plugins-disable.html.md.erb +++ b/admission-plugins-disable.html.md.erb @@ -3,16 +3,16 @@ title: Deactivating Admission Control Plugins for Tanzu Kubernetes Grid Integrat owner: TKGI --- -This topic describes how to deactivate <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) cluster admission control plugins. +This topic describes how to deactivate VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) cluster admission control plugins. -For more information about Admission Control Plugins, see [Using Admission Control Plugins for <%= vars.product_short %> Clusters](./admission-plugins.html). +For more information about Admission Control Plugins, see [Using Admission Control Plugins for Tanzu Kubernetes Grid Integrated Edition Clusters](./admission-plugins.html). ## Deactivating a Single Admission Control Plugin To deactivate a single admission control plugin, do the following: -1. Log in to <%= vars.ops_manager_full %> (<%= vars.ops_manager %>). -1. Click the <%= vars.product_tile %> tile. +1. Log in to VMware Tanzu Operations Manager (Ops Manager). +1. Click the Tanzu Kubernetes Grid Integrated Edition tile. 1. Select the plan where you configured the admission control plugin, such as **Plan 1**. 1. Deselect the admission control plugin. 1. Click **Save**. @@ -21,7 +21,7 @@ To deactivate a single admission control plugin, do the following: 1. Click **Apply Changes**. Alternatively, instead of enabling **Upgrade all clusters errand**, -you can upgrade individual Kubernetes clusters through the <%= vars.k8s_runtime_abbr %> Command Line Interface (<%= vars.k8s_runtime_abbr %> CLI). +you can upgrade individual Kubernetes clusters through the TKGI Command Line Interface (TKGI CLI). For instructions on upgrading individual Kubernetes clusters, see [Upgrading Clusters](upgrade-clusters.html). ## Deactivating an Orphaned Admission Control Plugin @@ -36,7 +36,7 @@ To deactivate an orphaned Admission control Plugin, complete the following workf 1. Obtain the FQDN, user name, and password of your Ops Manager. 1. Authenticate into the Ops Manager API and retrieve a UAA access token to access Ops Manager. For more information, see [Using the Ops Manager API](https://docs.vmware.com/en/VMware-Tanzu-Operations-Manager/3.0/vmware-tanzu-ops-manager/install-ops-man-api.html). -1. Obtain the BOSH deployment name for the <%= vars.product_short %> tile by doing one of the following options: +1. Obtain the BOSH deployment name for the Tanzu Kubernetes Grid Integrated Edition tile by doing one of the following options: 1. Option 1: Use the Ops Manager API: 1. In a terminal, run the following command: @@ -46,10 +46,10 @@ To deactivate an orphaned Admission control Plugin, complete the following workf 1. 
In the output, locate the `installation_name` that begins with `pivotal-container-service`. 1. Copy the entire BOSH deployment name, including the unique GUID. For example, `pivotal-container-service-4b48fc5b704d54c6c7de`. 1. Option 2: Use the Ops Manager UI: - 1. In Ops Manager, click the <%= vars.product_short %> tile. + 1. In Ops Manager, click the Tanzu Kubernetes Grid Integrated Edition tile. 1. Copy the BOSH deployment name including the GUID from the URL: - <%= vars.k8s_runtime_abbr %> GUID + TKGI GUID

The deployment name contains "pivotal-container-service" and a unique GUID string. For example, `pivotal-container-service-4b48fc5b704d54c6c7de`. 1. To deactivate the orphaned admission control plugin, run the following Ops Manager API command: @@ -63,7 +63,7 @@ To deactivate an orphaned Admission control Plugin, complete the following workf Where: * `OPS-MAN-FQDN` is the URL of your Ops Manager. - * `pivotal-container-service-GUID` is the BOSH deployment name of your <%= vars.product_short %> that you retrieved earlier in this procedure. + * `pivotal-container-service-GUID` is the BOSH deployment name of your Tanzu Kubernetes Grid Integrated Edition that you retrieved earlier in this procedure. * `UAA-ACCESS-TOKEN` is the UAA token you retrieved earlier in this procedure. * `PLAN-NUMBER` is the plan configuration you want to update. For example, `plan1` or `plan2`. @@ -79,7 +79,7 @@ To deactivate an orphaned Admission control Plugin, complete the following workf 1. Validate your manifest change in the Ops Manager UI. Do the following: 1. Log in to Ops Manager. 1. Select **Review Pending Changes**. - 1. On the Review Pending Changes pane, navigate to the <%= vars.product_short %> section and select **SEE CHANGES**. + 1. On the Review Pending Changes pane, navigate to the Tanzu Kubernetes Grid Integrated Edition section and select **SEE CHANGES**. 1. Verify that the admission control plugins are displayed as removed in the **Manifest** section. For example: Manifest diff displays removed admission control plugins diff --git a/admission-plugins.html.md.erb b/admission-plugins.html.md.erb index efda81066..07f5c5409 100644 --- a/admission-plugins.html.md.erb +++ b/admission-plugins.html.md.erb @@ -3,18 +3,18 @@ title: Using Admission Control Plugins for Tanzu Kubernetes Grid Integrated Edit owner: TKGI --- -The topics below describe how to manage and use admission control plugins for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) clusters. +The topics below describe how to manage and use admission control plugins for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) clusters. For more information about Admission Controllers, see [Using Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) in the Kubernetes documentation. -For details on the admission control plugins supported by <%= vars.k8s_runtime_abbr %>, see: +For details on the admission control plugins supported by TKGI, see: -* [Enabling the PodSecurityAdmission Plugin for <%= vars.product_short %> Clusters and Using Pod Security Admission](./pod-security-admission.html) -* [Enabling the SecurityContextDeny Admission Plugin for <%= vars.product_short %> Clusters](./security-context-deny.html) -

Note: Support for SecurityContextDeny admission controller has been removed in <%= vars.k8s_runtime_abbr %> v1.18. The SecurityContextDeny admission controller has been deprecated, and the Kubernetes community recommends the controller not be used. - Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in <%= vars.k8s_runtime_abbr %>. +* [Enabling the PodSecurityAdmission Plugin for Tanzu Kubernetes Grid Integrated Edition Clusters and Using Pod Security Admission](./pod-security-admission.html) +* [Enabling the SecurityContextDeny Admission Plugin for Tanzu Kubernetes Grid Integrated Edition Clusters](./security-context-deny.html) +

Note: Support for the SecurityContextDeny admission controller has been removed in TKGI v1.18. The SecurityContextDeny admission controller has been deprecated, and the Kubernetes community recommends against using it. + Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in TKGI.

To deactivate an admission control plugin, see: -* [Deactivating Admission Control Plugins for <%= vars.product_short %> Clusters](./admission-plugins-disable.html) +* [Deactivating Admission Control Plugins for Tanzu Kubernetes Grid Integrated Edition Clusters](./admission-plugins-disable.html) diff --git a/api-auth.html.md.erb b/api-auth.html.md.erb index 1ae46a09b..6f3cd8c2d 100644 --- a/api-auth.html.md.erb +++ b/api-auth.html.md.erb @@ -3,29 +3,29 @@ title: TKGI API Authentication owner: TKGI --- -This topic describes how the <%= vars.product_full %> API (<%= vars.control_plane %>) works with User Account and Authentication (UAA) to manage <%= vars.k8s_runtime_abbr %> deployment authentication and authorization. +This topic describes how the VMware Tanzu Kubernetes Grid Integrated Edition API (TKGI API) works with User Account and Authentication (UAA) to manage TKGI deployment authentication and authorization. -## Authentication of <%= vars.control_plane %> Requests +## Authentication of TKGI API Requests -Before users can log in and use the <%= vars.k8s_runtime_abbr %> CLI, you must configure <%= vars.control_plane %> access with UAA. For more information, -see [Managing Tanzu Kubernetes Grid Integrated Edition Users with UAA](manage-users.html) and [Logging in to <%= vars.product_short %>](login.html). +Before users can log in and use the TKGI CLI, you must configure TKGI API access with UAA. For more information, +see [Managing Tanzu Kubernetes Grid Integrated Edition Users with UAA](manage-users.html) and [Logging in to Tanzu Kubernetes Grid Integrated Edition](login.html). You use the UAA Command Line Interface (UAAC) to target the UAA server and request an access token for the UAA admin user. If your request is successful, the UAA server returns the access token. -The UAA admin access token authorizes you to make requests to the <%= vars.control_plane %> using the <%= vars.k8s_runtime_abbr %> CLI and grant cluster access to new or existing users. +The UAA admin access token authorizes you to make requests to the TKGI API using the TKGI CLI and grant cluster access to new or existing users. -When a user with cluster access logs in to the <%= vars.k8s_runtime_abbr %> CLI, the CLI requests an access token for the user from the UAA server. -If the request is successful, the UAA server returns an access token to the <%= vars.k8s_runtime_abbr %> CLI. -When the user runs <%= vars.k8s_runtime_abbr %> CLI commands, for example, `tkgi clusters`, the CLI sends the request to the <%= vars.control_plane %> server and includes the user's UAA token. +When a user with cluster access logs in to the TKGI CLI, the CLI requests an access token for the user from the UAA server. +If the request is successful, the UAA server returns an access token to the TKGI CLI. +When the user runs TKGI CLI commands, for example, `tkgi clusters`, the CLI sends the request to the TKGI API server and includes the user's UAA token. -The <%= vars.control_plane %> sends a request to the UAA server to validate the user's token. -If the UAA server confirms that the token is valid, the <%= vars.control_plane %> uses the cluster information from the <%= vars.k8s_runtime_abbr %> broker to respond to the request. +The TKGI API sends a request to the UAA server to validate the user's token. +If the UAA server confirms that the token is valid, the TKGI API uses the cluster information from the TKGI broker to respond to the request. 
For example, if the user runs `tkgi clusters`, the CLI returns a list of the clusters that the user is authorized to manage. -##Routing to the <%= vars.control_plane %> VM +##Routing to the TKGI API VM -The <%= vars.control_plane %> server and the UAA server use different port numbers on the API VM. -For example, if your <%= vars.control_plane %> domain is `api.tkgi.example.com`, you can reach your <%= vars.control_plane %> and UAA servers at the following URLs: +The TKGI API server and the UAA server use different port numbers on the API VM. +For example, if your TKGI API domain is `api.tkgi.example.com`, you can reach your TKGI API and UAA servers at the following URLs: @@ -33,7 +33,7 @@ For example, if your <%= vars.control_plane %> domain is `api.tkgi.example.com`, - + @@ -42,11 +42,11 @@ For example, if your <%= vars.control_plane %> domain is `api.tkgi.example.com`,
URL
<%= vars.control_plane %>TKGI API api.tkgi.example.com:9021
-Refer to **Ops Manager** > **<%= vars.product_tile %> tile** > **<%= vars.control_plane %>** > **API Hostname (FQDN)** for your <%= vars.control_plane %> domain. +Refer to **Ops Manager** > **Tanzu Kubernetes Grid Integrated Edition tile** > **TKGI API** > **API Hostname (FQDN)** for your TKGI API domain. Load balancer implementations differ by deployment environment. -For <%= vars.product_short %> deployments on GCP, AWS, or vSphere without NSX, you configure a load balancer to access -the <%= vars.control_plane %> when you install the <%= vars.product_tile %> tile. -For example, see [Configuring <%= vars.control_plane %> Load Balancer](./vsphere-configure-api.html). +For Tanzu Kubernetes Grid Integrated Edition deployments on GCP, AWS, or vSphere without NSX, you configure a load balancer to access +the TKGI API when you install the Tanzu Kubernetes Grid Integrated Edition tile. +For example, see [Configuring TKGI API Load Balancer](./vsphere-configure-api.html). -For overview information about load balancers in <%= vars.product_short %>, see [Load Balancers in <%= vars.product_short %> Deployments without NSX](about-lb.html#without-nsx-t). +For overview information about load balancers in Tanzu Kubernetes Grid Integrated Edition, see [Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments without NSX](about-lb.html#without-nsx-t). diff --git a/aws-api-load-balancer.html.md.erb b/aws-api-load-balancer.html.md.erb index 8f774910f..32a851cf7 100644 --- a/aws-api-load-balancer.html.md.erb +++ b/aws-api-load-balancer.html.md.erb @@ -12,10 +12,10 @@ To configure a load balancer for a different environment, see: ## Overview -<%= vars.recommended_by %> recommends that you create a <%= vars.control_plane %> -load balancer when installing <%= vars.product_short %> on AWS. +<%= vars.recommended_by %> recommends that you create a TKGI API +load balancer when installing Tanzu Kubernetes Grid Integrated Edition on AWS. -To configure your <%= vars.control_plane %> Load Balancer on AWS, complete the following: +To configure your TKGI API Load Balancer on AWS, complete the following: * [Define Load Balancer](#define-lb) * [Assign Security Groups](#assign-security-groups) @@ -38,14 +38,14 @@ Perform the following steps: 1. On the **Define Load Balancer** page, complete the **Basic Configuration** section as follows: 1. **Load Balancer name**: Name the load balancer. <%= vars.recommended_by %> recommends naming your load balancer `tkgi-api`. 1. **Create LB inside**: Select the VPC where you installed Ops Manager. - 1. **Create an internal load balancer**: Do not activate this check box. The <%= vars.product_short %> API load balancer must be internet-facing. + 1. **Create an internal load balancer**: Do not activate this check box. The Tanzu Kubernetes Grid Integrated Edition API load balancer must be internet-facing. 1. Complete the **Listeners Configuration** section as follows: 1. Configure the listener for UAA as follows: * Under **Load Balancer Protocol**, select **TCP**. * Under **Load Balancer Port**, enter `8443`. * Under **Instance Protocol**, select **TCP**. * Under **Instance Port**, enter `8443`. - 1. Configure the listener for <%= vars.product_short %> API Server as follows: + 1. Configure the listener for Tanzu Kubernetes Grid Integrated Edition API Server as follows: * Under **Load Balancer Protocol**, select **TCP**. * Under **Load Balancer Port**, enter `9021`. * Under **Instance Protocol**, select **TCP**. 
@@ -87,12 +87,12 @@ Perform the following steps to configure the health check: Perform the following steps to add EC2 Instances for the Load Balancer: 1. Open Ops Manager to the **Installation Dashboard** pane. -1. Click the **<%= vars.product_tile %>** tile. +1. Click the **Tanzu Kubernetes Grid Integrated Edition** tile. 1. Open the **Resource Config** pane. -1. Select **<%= vars.control_plane %>**. +1. Select **TKGI API**. 1. Review **Load Balancers**. -1. If **Load Balancers** does not include the load balancer to use for the <%= vars.control_plane %> VM: - 1. Input the load balancer to use for <%= vars.control_plane %> VM. +1. If **Load Balancers** does not include the load balancer to use for the TKGI API VM: + 1. Input the load balancer to use for TKGI API VM. 1. Click **Apply Changes**. ### (Optional) Add Tags diff --git a/aws-cluster-load-balancer.html.md.erb b/aws-cluster-load-balancer.html.md.erb index f052404e8..0c0a366e8 100644 --- a/aws-cluster-load-balancer.html.md.erb +++ b/aws-cluster-load-balancer.html.md.erb @@ -3,30 +3,30 @@ title: Creating and Configuring an AWS Load Balancer for Tanzu Kubernetes Grid I owner: TKGI --- -This topic describes how to configure an Amazon Web Services (AWS) load balancer for your <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) cluster. +This topic describes how to configure an Amazon Web Services (AWS) load balancer for your VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) cluster. ## Overview A load balancer is a third-party device that distributes network and application traffic across resources. You can use a load balancer to prevent individual network components from being overloaded by high traffic. -You can also use a load balancer to secure and facilitate access to a <%= vars.k8s_runtime_abbr %> cluster from outside the network. +You can also use a load balancer to secure and facilitate access to a TKGI cluster from outside the network. -You can use an AWS <%= vars.k8s_runtime_abbr %> cluster load balancer to secure and facilitate access to a <%= vars.product_short %> cluster from outside the network. -You can also [reconfigure](#reconfigure) your AWS <%= vars.product_short %> cluster load balancers. +You can use an AWS TKGI cluster load balancer to secure and facilitate access to a Tanzu Kubernetes Grid Integrated Edition cluster from outside the network. +You can also [reconfigure](#reconfigure) your AWS Tanzu Kubernetes Grid Integrated Edition cluster load balancers. -Using an AWS <%= vars.k8s_runtime_abbr %> cluster load balancer is optional, but adding one to your Kubernetes cluster can make it easier to manage the cluster using the <%= vars.control_plane %> and `kubectl`. +Using an AWS TKGI cluster load balancer is optional, but adding one to your Kubernetes cluster can make it easier to manage the cluster using the TKGI API and `kubectl`. -For more information about the different types of load balancers used in a <%= vars.product_short %> deployment see [Load Balancers in <%= vars.k8s_runtime_abbr %>](./about-lb.html). +For more information about the different types of load balancers used in a Tanzu Kubernetes Grid Integrated Edition deployment see [Load Balancers in TKGI](./about-lb.html).
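To make the overview's point concrete: once a cluster load balancer exists and the cluster's external hostname resolves to it, managing the cluster through the TKGI API and `kubectl` looks roughly like the following sketch. The cluster name `my-cluster` is a placeholder.

```bash
# Sketch: fetch cluster credentials through the TKGI API, then reach the
# Kubernetes API through the cluster load balancer. my-cluster is a placeholder.
tkgi get-credentials my-cluster
kubectl cluster-info
kubectl get nodes
```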

Note: If Kubernetes control plane node VMs are recreated for any reason, you must reconfigure your -AWS <%= vars.k8s_runtime_abbr %> cluster load balancers to point to the new control plane VMs. +AWS TKGI cluster load balancers to point to the new control plane VMs.

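Reconfiguring the load balancer after a control plane VM is recreated is a manual step. Assuming a classic ELB and placeholder names, it might look like the sketch below; the control plane IP comes from the `tkgi cluster` output, and stale instances may also need to be deregistered with `aws elb deregister-instances-from-load-balancer`.

```bash
# Sketch only: look up the recreated control plane VM by the private IP shown in
# `tkgi cluster` output, then register it with the cluster's classic ELB.
# my-cluster, my-cluster-lb, and 10.0.11.10 are placeholder values.
tkgi cluster my-cluster    # note the Kubernetes Master IP(s) in the output

NEW_MASTER_ID=$(aws ec2 describe-instances \
  --filters "Name=private-ip-address,Values=10.0.11.10" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text)

aws elb register-instances-with-load-balancer \
  --load-balancer-name my-cluster-lb \
  --instances "$NEW_MASTER_ID"
```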
## Prerequisite -The version of the <%= vars.k8s_runtime_abbr %> CLI you are using must match the version of the <%= vars.product_tile %> tile that you are installing. +The version of the TKGI CLI you are using must match the version of the Tanzu Kubernetes Grid Integrated Edition tile that you are installing.

-Note: Modify the example commands in this procedure to match the details of your <%= vars.product_short %> installation.
+Note: Modify the example commands in this procedure to match the details of your Tanzu Kubernetes Grid Integrated Edition installation.

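A quick way to check the CLI half of this prerequisite is shown below; the tile version itself appears in the Ops Manager Installation Dashboard.

```bash
# Print the installed TKGI CLI version and compare it with the
# Tanzu Kubernetes Grid Integrated Edition tile version in Ops Manager.
tkgi --version
```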
## Configure AWS Load Balancer diff --git a/aws-configure-users.html.md.erb b/aws-configure-users.html.md.erb index 20f75e90b..69d22dc9b 100644 --- a/aws-configure-users.html.md.erb +++ b/aws-configure-users.html.md.erb @@ -4,33 +4,33 @@ owner: TKGI iaas: AWS --- -This topic describes how to create <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) admin users with User Account and Authentication (UAA). +This topic describes how to create VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) admin users with User Account and Authentication (UAA). ## Overview -UAA is the identity management service for <%= vars.k8s_runtime_abbr %>. -You must use UAA to create an admin user during your initial set up of <%= vars.k8s_runtime_abbr %>. +UAA is the identity management service for TKGI. +You must use UAA to create an admin user during your initial set up of TKGI. -<%= vars.k8s_runtime_abbr %> includes a UAA server, hosted on the <%= vars.control_plane %> VM. -Use the UAA Command Line Interface (UAAC) from the <%= vars.ops_manager_full %> (<%= vars.ops_manager %>) VM to interact with the <%= vars.k8s_runtime_abbr %> UAA server. +TKGI includes a UAA server, hosted on the TKGI API VM. +Use the UAA Command Line Interface (UAAC) from the VMware Tanzu Operations Manager (Ops Manager) VM to interact with the TKGI UAA server. You can also install UAAC on a workstation and run UAAC commands from there. ## Prerequisites -Before setting up admin users for <%= vars.product_short %>, you must have one of the following: +Before setting up admin users for Tanzu Kubernetes Grid Integrated Edition, you must have one of the following: * SSH access to the Ops Manager VM -* A machine that can connect to your <%= vars.control_plane %> VM +* A machine that can connect to your TKGI API VM -## Step 1: Connect to the <%= vars.control_plane %> VM +## Step 1: Connect to the TKGI API VM -You can connect to the <%= vars.control_plane %> VM from the Ops Manager VM or from a different machine such as your local workstation. +You can connect to the TKGI API VM from the Ops Manager VM or from a different machine such as your local workstation. ### Option 1: Connect through the Ops Manager VM -You can connect to the <%= vars.control_plane %> VM by logging in to the Ops Manager VM through SSH. +You can connect to the TKGI API VM by logging in to the Ops Manager VM through SSH. To SSH into the Ops Manager VM on AWS, do the following: 1. Retrieve the key pair you used when you @@ -62,7 +62,7 @@ created the Ops Manager VM. To see the name of the key pair: ### Option 2: Connect through a Non-Ops Manager Machine -To connect to the <%= vars.control_plane %> VM and run UAA commands, do the following: +To connect to the TKGI API VM and run UAA commands, do the following: 1. Install UAAC on your machine. For example: @@ -83,25 +83,25 @@ To connect to the <%= vars.control_plane %> VM and run UAA commands, do the foll <%= partial 'uaa-admin-login' %> -##Step 3: Assign <%= vars.product_short %> Cluster Scopes +##Step 3: Assign Tanzu Kubernetes Grid Integrated Edition Cluster Scopes The `pks.clusters.manage` and `pks.clusters.admin` UAA scopes grant users the ability -to create and manage Kubernetes clusters in <%= vars.product_short %>. -For information about UAA scopes in <%= vars.product_short %>, see -[UAA Scopes for <%= vars.product_short %> Users](uaa-scopes.html). +to create and manage Kubernetes clusters in Tanzu Kubernetes Grid Integrated Edition. 
+For information about UAA scopes in Tanzu Kubernetes Grid Integrated Edition, see +[UAA Scopes for Tanzu Kubernetes Grid Integrated Edition Users](uaa-scopes.html). -To create <%= vars.product_short %> users with the `pks.clusters.manage` or `pks.clusters.admin` UAA scope, +To create Tanzu Kubernetes Grid Integrated Edition users with the `pks.clusters.manage` or `pks.clusters.admin` UAA scope, perform one or more of the following procedures based on the needs of your deployment: -* To assign <%= vars.k8s_runtime_abbr %> cluster scopes to an individual user, see -[Grant <%= vars.product_short %> Access to an Individual User](manage-users.html#uaa-user). - Follow this procedure if you selected **Internal UAA** when you configured **UAA** in the <%= vars.product_short %> tile. For more information, see [Installing <%= vars.product_short %> on AWS](installing-aws.html#uaa). -* To assign <%= vars.k8s_runtime_abbr %> cluster scopes to an LDAP group, see [Grant <%= vars.product_short %> Access to an External LDAP Group](manage-users.html#external-group). Follow this procedure if you selected **LDAP Server** when you configured **UAA** in the <%= vars.product_short %> tile. For more information, see [Installing <%= vars.product_short %> <%= vars.k8s_runtime_abbr %> on AWS](installing-aws.html#uaa). -* To assign <%= vars.k8s_runtime_abbr %> cluster scopes to a SAML group, see [Grant <%= vars.product_short %> Access to an External SAML Group](manage-users.html#saml). Follow this procedure if you selected **SAML Identity Provider** when you configured **UAA** in the <%= vars.product_short %> tile. For more information, see [Installing <%= vars.product_short %> <%= vars.k8s_runtime_abbr %> on AWS](installing-aws.html#uaa). -* To assign <%= vars.k8s_runtime_abbr %> cluster scopes to a client, see [Grant <%= vars.product_short %> Access to a Client](manage-users.html#uaa-client). +* To assign TKGI cluster scopes to an individual user, see +[Grant Tanzu Kubernetes Grid Integrated Edition Access to an Individual User](manage-users.html#uaa-user). + Follow this procedure if you selected **Internal UAA** when you configured **UAA** in the Tanzu Kubernetes Grid Integrated Edition tile. For more information, see [Installing Tanzu Kubernetes Grid Integrated Edition on AWS](installing-aws.html#uaa). +* To assign TKGI cluster scopes to an LDAP group, see [Grant Tanzu Kubernetes Grid Integrated Edition Access to an External LDAP Group](manage-users.html#external-group). Follow this procedure if you selected **LDAP Server** when you configured **UAA** in the Tanzu Kubernetes Grid Integrated Edition tile. For more information, see [Installing Tanzu Kubernetes Grid Integrated Edition TKGI on AWS](installing-aws.html#uaa). +* To assign TKGI cluster scopes to a SAML group, see [Grant Tanzu Kubernetes Grid Integrated Edition Access to an External SAML Group](manage-users.html#saml). Follow this procedure if you selected **SAML Identity Provider** when you configured **UAA** in the Tanzu Kubernetes Grid Integrated Edition tile. For more information, see [Installing Tanzu Kubernetes Grid Integrated Edition TKGI on AWS](installing-aws.html#uaa). +* To assign TKGI cluster scopes to a client, see [Grant Tanzu Kubernetes Grid Integrated Edition Access to a Client](manage-users.html#uaa-client). ## Next Step -After you create admin users in <%= vars.product_short %>, the admin users can create and manage -Kubernetes clusters in <%= vars.product_short %>. 
+After you create admin users in Tanzu Kubernetes Grid Integrated Edition, the admin users can create and manage +Kubernetes clusters in Tanzu Kubernetes Grid Integrated Edition. For more information, see [Managing Kubernetes Clusters and Workloads](managing-clusters.html). diff --git a/aws-index.html.md.erb b/aws-index.html.md.erb index b99f17027..cbbfebdaa 100644 --- a/aws-index.html.md.erb +++ b/aws-index.html.md.erb @@ -4,11 +4,11 @@ owner: Ops Manager iaas: AWS --- -The topics below describe how to install <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on Amazon Web Services (AWS). +The topics below describe how to install VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on Amazon Web Services (AWS). -## Install <%= vars.product_short %> on AWS +## Install Tanzu Kubernetes Grid Integrated Edition on AWS -To install <%= vars.product_short %> on AWS, follow the instructions below: +To install Tanzu Kubernetes Grid Integrated Edition on AWS, follow the instructions below: -## Install the <%= vars.k8s_runtime_abbr %> and Kubernetes CLIs +## Install the TKGI and Kubernetes CLIs -The <%= vars.k8s_runtime_abbr %> CLI and Kubernetes CLI help you interact with your <%= vars.product_short %>-provisioned Kubernetes clusters and Kubernetes workloads. +The TKGI CLI and Kubernetes CLI help you interact with your Tanzu Kubernetes Grid Integrated Edition-provisioned Kubernetes clusters and Kubernetes workloads. To install the CLIs, follow the instructions below:
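Circling back to the admin-user workflow in aws-configure-users.html.md.erb above: a minimal UAAC sketch, assuming the example API domain, the UAA `admin` client, and placeholder user details, is shown below. The exact target URL, client secret, and certificate path depend on your installation.

```bash
# Sketch only: target the TKGI UAA server on port 8443, authenticate as the UAA
# admin client, create a user, and grant the pks.clusters.admin scope.
# api.tkgi.example.com, ROOT-CA-CERT-PATH, ADMIN-CLIENT-SECRET, PASSWORD, alice,
# and alice@example.com are placeholder values.
uaac target https://api.tkgi.example.com:8443 --ca-cert ROOT-CA-CERT-PATH
uaac token client get admin -s ADMIN-CLIENT-SECRET
uaac user add alice --emails alice@example.com -p PASSWORD
uaac member add pks.clusters.admin alice
```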