diff --git a/_add-clusters-workloads.html.md.erb b/_add-clusters-workloads.html.md.erb
index 2c3b64a4b..c13caaab7 100644
--- a/_add-clusters-workloads.html.md.erb
+++ b/_add-clusters-workloads.html.md.erb
@@ -2,7 +2,7 @@
1. Add more workloads and create an additional cluster. For more information, see
About Cluster Upgrades in _Maintaining Workload Uptime_ and
Creating Clusters.
- 1. Monitor the <%= vars.product_short %> control plane in the <%= vars.product_tile %> tile > Status tab.
- Review the load and resource usage data for the <%= vars.control_plane %> and <%= vars.control_plane_db %> VMs.
+ 1. Monitor the Tanzu Kubernetes Grid Integrated Edition control plane in the Tanzu Kubernetes Grid Integrated Edition tile > Status tab.
+ Review the load and resource usage data for the TKGI API and TKGI Database VMs.
If any levels are at capacity, scale up the VMs.
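For reference, adding a cluster from the TKGI CLI generally follows the pattern below; the API address, credentials, cluster name, hostname, and plan are placeholders and depend on how the tile and your DNS are configured:

```console
$ tkgi login -a api.tkgi.example.com -u admin -p 'PASSWORD' --ca-cert /var/tempest/workspaces/default/root_ca_certificate
$ tkgi create-cluster test-cluster-2 --external-hostname test-cluster-2.tkgi.example.com --plan small
$ tkgi cluster test-cluster-2
```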
diff --git a/_api.html.md.erb b/_api.html.md.erb
index cf0037a6a..c2cca38e0 100644
--- a/_api.html.md.erb
+++ b/_api.html.md.erb
@@ -1,23 +1,23 @@
Perform the following steps:
-1. Click **<%= vars.control_plane %>**.
+1. Click **TKGI API**.
-1. Under **Certificate to secure the <%= vars.control_plane %>**, provide a certificate and private key pair.
+1. Under **Certificate to secure the TKGI API**, provide a certificate and private key pair.
- 
+ 
- The certificate that you supply must cover the specific subdomain that routes to the <%= vars.control_plane %> VM with TLS termination on the ingress.
+ The certificate that you supply must cover the specific subdomain that routes to the TKGI API VM with TLS termination on the ingress.
If you use UAA as your OIDC provider, this certificate must be a proper certificate chain and have a SAN field.
Warning: TLS certificates generated for wildcard DNS records only work for a single domain level.
For example, a certificate generated for *.tkgi.EXAMPLE.com
does not permit communication to *.api.tkgi.EXAMPLE.com
.
- If the certificate does not contain the correct FQDN for the <%= vars.control_plane %>, calls to the API will fail.
Note: If you deployed a global HTTP load balancer for Ops Manager without a certificate, you can configure the load balancer to use this newly-generated certificate.
@@ -26,14 +26,14 @@ Perform the following steps:
Preparing to Deploy Ops Manager on GCP Manually.
<% else %>
<% end %>
-1. Under **API Hostname (FQDN)**, enter the FQDN that you registered to point to the <%= vars.control_plane %> load balancer, such as `api.tkgi.example.com`.
-To retrieve the public IP address or FQDN of the <%= vars.control_plane %> load balancer,
+1. Under **API Hostname (FQDN)**, enter the FQDN that you registered to point to the TKGI API load balancer, such as `api.tkgi.example.com`.
+To retrieve the public IP address or FQDN of the TKGI API load balancer,
log in to your IaaS console.
-Note: The FQDN for the <%= vars.k8s_runtime_abbr %> API must not contain uppercase letters or trailing whitespace.
+Note: The FQDN for the TKGI API must not contain uppercase letters or trailing whitespace.
1. Under **Worker VM Max in Flight**, enter the maximum number of non-canary worker instances to create, update or upgrade in parallel within an availability zone.
-Note: You must specify the Balance other jobs in AZ, but the selection has no effect in the current version of <%= vars.product_short %>.
+1. Under **Balance other jobs in**, select the AZ for balancing other Tanzu Kubernetes Grid Integrated Edition control plane jobs.
+Note: You must specify the Balance other jobs in AZ, but the selection has no effect in the current version of Tanzu Kubernetes Grid Integrated Edition.
-1. Under **Network**, select the infrastructure subnet that you created for <%= vars.product_short %> component VMs, such as the <%= vars.control_plane %> and <%= vars.control_plane_db %> VMs.
+1. Under **Network**, select the infrastructure subnet that you created for Tanzu Kubernetes Grid Integrated Edition component VMs, such as the TKGI API and TKGI Database VMs.
1. Under **Service Network**, select the services subnet that you created for Kubernetes cluster VMs.
1. Click **Save**.
diff --git a/_bbr-supported-components.html.md.erb b/_bbr-supported-components.html.md.erb
index 9457791cd..8850733b6 100644
--- a/_bbr-supported-components.html.md.erb
+++ b/_bbr-supported-components.html.md.erb
@@ -1,6 +1,6 @@
BBR can back up the following components:
* BOSH Director
-* <%= vars.product_short %> control plane API VM and its ETCD database
-* <%= vars.product_short %> control plane database VM (MySQL)
-* <%= vars.product_short %> cluster data, from the clusters' ETCD databases
+* Tanzu Kubernetes Grid Integrated Edition control plane API VM and its ETCD database
+* Tanzu Kubernetes Grid Integrated Edition control plane database VM (MySQL)
+* Tanzu Kubernetes Grid Integrated Edition cluster data, from the clusters' ETCD databases
diff --git a/_bosh-ssh-api.html.md.erb b/_bosh-ssh-api.html.md.erb
index c91c6ec74..4d99cb2a9 100644
--- a/_bosh-ssh-api.html.md.erb
+++ b/_bosh-ssh-api.html.md.erb
@@ -1,6 +1,6 @@
1. Log in to the BOSH Director. For instructions, see [Log in to the BOSH Director VM](diagnostic-tools.html#alias).
-1. To identify your <%= vars.k8s_runtime_abbr %> deployment name, run the following command:
+1. To identify your TKGI deployment name, run the following command:
```
bosh -e ENVIRONMENT deployments
@@ -12,10 +12,10 @@
```console
$ bosh -e tkgi deployments
```
- Your <%= vars.k8s_runtime_abbr %> deployment name begins with `pivotal-container-service` and includes
+ Your TKGI deployment name begins with `pivotal-container-service` and includes
a BOSH-generated identifier.
-1. To identify your <%= vars.control_plane %> VM name, run the following command:
+1. To identify your TKGI API VM name, run the following command:
```
bosh -e ENVIRONMENT -d DEPLOYMENT vms
@@ -24,20 +24,20 @@
Where:
* `ENVIRONMENT` is the BOSH environment alias.
- * `DEPLOYMENT` is your <%= vars.k8s_runtime_abbr %> deployment name.
+ * `DEPLOYMENT` is your TKGI deployment name.
For example:
```console
$ bosh -e tkgi -d pivotal-container-service-a1b2c333d444e5f66a77 vms
```
- Your <%= vars.control_plane %> VM name begins with `pivotal-container-service` and includes a
+ Your TKGI API VM name begins with `pivotal-container-service` and includes a
BOSH-generated identifier.
-Note: The <%= vars.control_plane %> VM identifier is different from the identifier in your <%= vars.k8s_runtime_abbr %>
+Note: The TKGI API VM identifier is different from the identifier in your TKGI
deployment name.
-1. To SSH into the <%= vars.control_plane %> VM:
+1. To SSH into the TKGI API VM:
```
bosh -e ENVIRONMENT -d DEPLOYMENT ssh TKGI-API-VM
@@ -46,8 +46,8 @@
Where:
* `ENVIRONMENT` is the BOSH environment alias.
- * `DEPLOYMENT` is your <%= vars.k8s_runtime_abbr %> deployment name.
- * `TKGI-API-VM` is your <%= vars.control_plane %> VM name.
+ * `DEPLOYMENT` is your TKGI deployment name.
+ * `TKGI-API-VM` is your TKGI API VM name.
For example:
```console
diff --git a/_bosh-ssh-db.html.md.erb b/_bosh-ssh-db.html.md.erb
index c7ee618a9..bafd1fc03 100644
--- a/_bosh-ssh-db.html.md.erb
+++ b/_bosh-ssh-db.html.md.erb
@@ -1,6 +1,6 @@
1. Log in to the BOSH Director. For instructions, see [Log in to the BOSH Director VM](diagnostic-tools.html#alias).
-1. To identify your <%= vars.k8s_runtime_abbr %> deployment name:
+1. To identify your TKGI deployment name:
```
bosh -e ENVIRONMENT deployments
@@ -12,10 +12,10 @@
```console
$ bosh -e tkgi deployments
```
- Your <%= vars.k8s_runtime_abbr %> deployment name begins with `pivotal-container-service` and includes
+ Your TKGI deployment name begins with `pivotal-container-service` and includes
a BOSH-generated identifier.
-1. To identify your <%= vars.control_plane_db %> VM names:
+1. To identify your TKGI Database VM names:
```
bosh -e ENVIRONMENT -d DEPLOYMENT vms
@@ -24,18 +24,18 @@
Where:
* `ENVIRONMENT` is the BOSH environment alias.
- * `DEPLOYMENT` is your <%= vars.k8s_runtime_abbr %> deployment name.
+ * `DEPLOYMENT` is your TKGI deployment name.
For example:
```console
$ bosh -e tkgi -d pivotal-container-service-a1b2c333d444e5f66a77 vms
```
- Your <%= vars.control_plane_db %> VM names begin with `pks-db` and include a
+ Your TKGI Database VM names begin with `pks-db` and include a
BOSH-generated identifier.
-1. Choose one of the returned <%= vars.control_plane_db %> VMs as the database VM to SSH into.
-1. To SSH into the selected <%= vars.control_plane_db %> VM, run the following command:
+1. Choose one of the returned TKGI Database VMs as the database VM to SSH into.
+1. To SSH into the selected TKGI Database VM, run the following command:
```
bosh -e ENVIRONMENT -d DEPLOYMENT ssh TKGI-DB-VM
@@ -44,8 +44,8 @@
Where:
* `ENVIRONMENT` is the BOSH environment alias.
- * `DEPLOYMENT` is your <%= vars.k8s_runtime_abbr %> deployment name.
- * `TKGI-DB-VM` is the name of the <%= vars.control_plane_db %> VM to SSH into.
+ * `DEPLOYMENT` is your TKGI deployment name.
+ * `TKGI-DB-VM` is the name of the TKGI Database VM to SSH into.
For example:
```console
diff --git a/_cloud-provider.html.md.erb b/_cloud-provider.html.md.erb
index 1f8a2f72f..450e0b88f 100644
--- a/_cloud-provider.html.md.erb
+++ b/_cloud-provider.html.md.erb
@@ -1,4 +1,4 @@
-In the procedure below, you use credentials for vCenter master VMs. You must have provisioned the service account with the correct permissions. For more information, see [Create the Master Node Service Account](vsphere-prepare-env.html#create-master) in _Preparing vSphere Before Deploying <%= vars.product_short %>_.
+In the procedure below, you use credentials for vCenter master VMs. You must have provisioned the service account with the correct permissions. For more information, see [Create the Master Node Service Account](vsphere-prepare-env.html#create-master) in _Preparing vSphere Before Deploying Tanzu Kubernetes Grid Integrated Edition_.
To configure your Kubernetes cloud provider settings, follow the procedure below:
@@ -7,7 +7,7 @@ To configure your Kubernetes cloud provider settings, follow the procedure below
Warning: The vSphere Container Storage Plug-in will not function if you do not specify the domain name for active directory users.
1. Enter your **vCenter Host**. For example, `vcenter-example.com`.
Note: The FQDN for the vCenter Server cannot contain uppercase letters.
@@ -16,7 +16,7 @@ To configure your Kubernetes cloud provider settings, follow the procedure below
Populate **Datastore Name** with the Persistent Datastore name configured in your **BOSH Director** tile under **vCenter Config** > **Persistent Datastore Names**. Enter only a single Persistent datastore in the **Datastore Name** field.
- - The vSphere datastore type must be Datastore. <%= vars.product_short %> does not support the use of vSphere Datastore Clusters with or without Storage DRS. For more information, see Datastores and Datastore Clusters in the vSphere documentation.
+ - The vSphere datastore type must be Datastore. Tanzu Kubernetes Grid Integrated Edition does not support the use of vSphere Datastore Clusters with or without Storage DRS. For more information, see Datastores and Datastore Clusters in the vSphere documentation.
- The Datastore Name is the default datastore used if the Kubernetes cluster StorageClass does not define a StoragePolicy. Do not enter a datastore that is a list of BOSH Job/VMDK datastores. For more information, see PersistentVolume Storage Options on vSphere.
- For multi-AZ and multi-cluster environments, your Datastore Name must be a shared Persistent datastore available to each vSphere cluster. Do not enter a datastore that is local to a single cluster. For more information, see PersistentVolume Storage Options on vSphere.
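The datastore fallback described above only applies when a cluster's StorageClass does not specify a storage policy. A quick way to review the storage classes in an existing cluster is shown below; the class name is a placeholder and the output columns vary by Kubernetes version:

```console
$ kubectl get storageclass
$ kubectl describe storageclass ci-storage
```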
diff --git a/_cluster-monitoring.html.md.erb b/_cluster-monitoring.html.md.erb
index 19aed72b2..7577eb94b 100644
--- a/_cluster-monitoring.html.md.erb
+++ b/_cluster-monitoring.html.md.erb
@@ -37,7 +37,7 @@ To use Wavefront with Windows worker-based clusters, developers must install Wav
To enable and configure Wavefront monitoring:
-1. In the <%= vars.product_tile %> tile, select **In-Cluster Monitoring**.
+1. In the Tanzu Kubernetes Grid Integrated Edition tile, select **In-Cluster Monitoring**.
1. Under **Wavefront Integration**, select **Yes**.
1. Under **Wavefront URL**, enter the URL of your Wavefront subscription. For example:
```console
@@ -47,14 +47,14 @@ To enable and configure Wavefront monitoring:
1. (Optional) For installations that require a proxy server for outbound Internet access, enable access by entering values for **HTTP Proxy Host**, **HTTP Proxy Port**, **Proxy username**, and **Proxy password**.
1. Click **Save**.
-The <%= vars.product_tile %> tile does not validate your Wavefront configuration settings. To verify your setup, look for cluster and pod metrics in Wavefront.
+The Tanzu Kubernetes Grid Integrated Edition tile does not validate your Wavefront configuration settings. To verify your setup, look for cluster and pod metrics in Wavefront.
<% if current_page.data.iaas == "vSphere" || current_page.data.iaas == "vSphere-NSX-T" %>
#### VMware vRealize Operations Management Pack for Container Monitoring
-You can monitor <%= vars.product_short %> Kubernetes clusters with VMware vRealize Operations Management Pack for Container Monitoring.
+You can monitor Tanzu Kubernetes Grid Integrated Edition Kubernetes clusters with VMware vRealize Operations Management Pack for Container Monitoring.
-To integrate <%= vars.product_short %> with VMware vRealize Operations Management Pack for Container Monitoring, you must deploy a container running [cAdvisor](https://github.com/google/cadvisor) in your <%= vars.k8s_runtime_abbr %> deployment.
+To integrate Tanzu Kubernetes Grid Integrated Edition with VMware vRealize Operations Management Pack for Container Monitoring, you must deploy a container running [cAdvisor](https://github.com/google/cadvisor) in your TKGI deployment.
cAdvisor is an open source tool that provides monitoring and statistics for Kubernetes clusters.
@@ -64,7 +64,7 @@ To deploy a cAdvisor container:
1. Under **Deploy cAdvisor**, select **Yes**.
1. Click **Save**.
-For more information about integrating this type of monitoring with <%= vars.k8s_runtime_abbr %>, see the [VMware vRealize Operations Management Pack for Container Monitoring User Guide](https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.4/container-monitoring/GUID-BD6B5510-4A16-412D-B5AD-43F74C300C91.html) and [Release Notes](https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.4/rn/Container-Monitoring-Release-Notes.html) in the VMware documentation.
+For more information about integrating this type of monitoring with TKGI, see the [VMware vRealize Operations Management Pack for Container Monitoring User Guide](https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.4/container-monitoring/GUID-BD6B5510-4A16-412D-B5AD-43F74C300C91.html) and [Release Notes](https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.4/rn/Container-Monitoring-Release-Notes.html) in the VMware documentation.
<% else %>
#### cAdvisor
@@ -95,11 +95,11 @@ To enable clusters to send Kubernetes node metrics and pod metrics to metric
sinks:
1. In **In-Cluster Monitoring**, select **Enable Metric Sink Resources**.
-If you enable this check box, <%= vars.product_short %> deploys Telegraf as a
+If you enable this check box, Tanzu Kubernetes Grid Integrated Edition deploys Telegraf as a
`DaemonSet`, a pod that runs on each worker node in all your Kubernetes clusters.
1. (Optional) To enable Node Exporter to send worker node metrics to metric
sinks of kind `ClusterMetricSink`, select **Enable node exporter on workers**.
-If you enable this check box, <%= vars.product_short %> deploys Node Exporter as
+If you enable this check box, Tanzu Kubernetes Grid Integrated Edition deploys Node Exporter as
a `DaemonSet`, a pod that runs on each worker node in all your Kubernetes
clusters.
@@ -119,7 +119,7 @@ _Monitoring Workers and Workloads_.
To enable clusters to send Kubernetes API events and pod logs to log sinks:
1. Select **Enable Log Sink Resources**. If you enable this check box,
-<%= vars.product_short %> deploys Fluent Bit as a `DaemonSet`, a pod that runs
+Tanzu Kubernetes Grid Integrated Edition deploys Fluent Bit as a `DaemonSet`, a pod that runs
on each worker node in all your Kubernetes clusters.
1. (Optional) To increase the Fluent Bit Pod memory limit, enter a value greater than 100 in the **Fluent-bit container memory limit(Mi)** field.
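After you enable metric or log sink resources and apply changes, one way to confirm that the Telegraf and Fluent Bit DaemonSets are running is to list the DaemonSets in the `pks-system` namespace of a provisioned cluster; exact DaemonSet names can differ between TKGI versions:

```console
$ kubectl get daemonsets --namespace pks-system
```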
diff --git a/_console-usage-data.html.md.erb b/_console-usage-data.html.md.erb
index c0e9c62fe..7cb377381 100644
--- a/_console-usage-data.html.md.erb
+++ b/_console-usage-data.html.md.erb
@@ -14,13 +14,13 @@ To configure VMware's Customer Experience Improvement Program (CEIP), do the fol
* Your entitlement account number or Tanzu customer number.
If you are a VMware customer, you can find your entitlement account number in your **Account Summary** on [my.vmware.com](https://my.vmware.com).
If you are a Pivotal customer, you can find your Pivotal Customer Number in your Pivotal Order Confirmation email.
- * A descriptive name for your <%= vars.k8s_runtime_abbr %> installation.
+ * A descriptive name for your TKGI installation.
The label you assign to this installation will be used in the reports to identify the environment.
1. To provide information about the purpose for this installation, select an option.

1. Click **Save**.
-Note: If you join the CEIP Program for <%= vars.product_short %>, open your firewall to allow outgoing access to
+Note: If you join the CEIP Program for Tanzu Kubernetes Grid Integrated Edition, open your firewall to allow outgoing access to
https://vcsa.vmware.com/ph on port 443.
-Note: Even if you do not wish to participate in CIEP, <%= vars.product_short %>-provisioned clusters send usage data to the <%= vars.k8s_runtime_abbr %> control plane.
- However, this data is not sent to VMware and remains on your <%= vars.product_short %> installation.
+Note: Even if you do not wish to participate in CEIP, Tanzu Kubernetes Grid Integrated Edition-provisioned clusters send usage data to the TKGI control plane.
+ However, this data is not sent to VMware and remains on your Tanzu Kubernetes Grid Integrated Edition installation.
diff --git a/_create-auth-token-var.html.md.erb b/_create-auth-token-var.html.md.erb
index 9ed6b58ee..224091b4f 100644
--- a/_create-auth-token-var.html.md.erb
+++ b/_create-auth-token-var.html.md.erb
@@ -6,9 +6,9 @@
```
Where:
- * `TKGI-API` is the FQDN of your <%= vars.control_plane %> endpoint. For example, `api.tkgi.example.com`.
- * `USER-ID` is your <%= vars.product_short %> user ID.
- * `PASSWORD` is your <%= vars.product_short %> password.
+ * `TKGI-API` is the FQDN of your TKGI API endpoint. For example, `api.tkgi.example.com`.
+ * `USER-ID` is your Tanzu Kubernetes Grid Integrated Edition user ID.
+ * `PASSWORD` is your Tanzu Kubernetes Grid Integrated Edition password.
* `YOUR-ACCESS-TOKEN` is the name of your access token environment variable.
For example:
diff --git a/_errands.html.md.erb b/_errands.html.md.erb
index 01a7b720a..01c39f016 100644
--- a/_errands.html.md.erb
+++ b/_errands.html.md.erb
@@ -1,7 +1,7 @@
Errands are scripts that run at designated points during an installation.
To configure which post-deploy and pre-delete errands run for
-<%= vars.product_short %>:
+Tanzu Kubernetes Grid Integrated Edition:
1. Make a selection in the dropdown next to each errand.
<% if current_page.data.iaas == "vSphere-NSX-T" %>
@@ -21,39 +21,39 @@ To configure which post-deploy and pre-delete errands run for
<% end %>
1. (Optional) Set the **Run smoke tests** errand to **On**.
- The Smoke Test errand smoke tests the <%= vars.k8s_runtime_abbr %> upgrade by creating and deleting a test Kubernetes cluster.
+ The Smoke Test errand smoke tests the TKGI upgrade by creating and deleting a test Kubernetes cluster.
If test cluster creation or deletion fails, the errand fails, and the installation of the
- <%= vars.k8s_runtime_abbr %> tile halts.
+ TKGI tile halts.
<% if current_page.data.iaas == "vSphere-NSX-T" %>
- The errand uses the <%= vars.k8s_runtime_abbr %> CLI to create the test cluster configured using either
- the configuration settings on the <%= vars.k8s_runtime_abbr %> tile - the default, or a network profile.
+ The errand uses the TKGI CLI to create the test cluster configured using either
+ the configuration settings on the TKGI tile - the default, or a network profile.
-1. (Optional) To configure the Smoke Test errand to use a network profile instead of the default configuration settings on the <%= vars.k8s_runtime_abbr %> tile:
+1. (Optional) To configure the Smoke Test errand to use a network profile instead of the default configuration settings on the TKGI tile:
* Create a network profile with your preferred smoke test settings.
* Configure **Errand Settings** > **Smoke tests - Network Profile Name** with the network profile name.
-Warning: If you have <%= vars.k8s_runtime_abbr %>-provisioned Windows worker clusters,
- do not activate the Upgrade all clusters errand before upgrading to the <%= vars.k8s_runtime_abbr %> v1.17 tile.
+Warning: If you have TKGI-provisioned Windows worker clusters,
+ do not activate the Upgrade all clusters errand before upgrading to the TKGI v1.17 tile.
You cannot use the Upgrade all clusters errand because you must manually migrate each individual Windows worker cluster to the CSI Driver for vSphere. For more information, see Configure vSphere CSI for Windows in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
<% end %>
Note: <%= vars.recommended_by %> recommends that you
review the VMware Tanzu Network metadata and confirm stemcell version compatibility before using
diff --git a/_global-proxy.html.md.erb b/_global-proxy.html.md.erb
index 3b138f1e4..e75be9dce 100644
--- a/_global-proxy.html.md.erb
+++ b/_global-proxy.html.md.erb
@@ -1,15 +1,15 @@
-1. (Optional) Configure <%= vars.product_short %> to use a proxy.
+1. (Optional) Configure Tanzu Kubernetes Grid Integrated Edition to use a proxy.
Production environments can deny direct access to public Internet services and between internal services by placing an HTTP or HTTPS proxy in the network path between Kubernetes nodes and those services.
-Configure <%= vars.product_short %> to use your proxy and activate the following:
- * <%= vars.control_plane %> access to public Internet services and other internal services.
- * <%= vars.product_short %>-deployed Kubernetes nodes access to public Internet services and other internal services.
- * <%= vars.product_short %> Telemetry ability to forward Telemetry data to the CEIP and Telemetry program.
+Configure Tanzu Kubernetes Grid Integrated Edition to use your proxy and activate the following:
+ * TKGI API access to public Internet services and other internal services.
+ * Tanzu Kubernetes Grid Integrated Edition-deployed Kubernetes nodes access to public Internet services and other internal services.
+ * Tanzu Kubernetes Grid Integrated Edition Telemetry ability to forward Telemetry data to the CEIP and Telemetry program.
Note: This setting does not set the proxy for running Kubernetes workloads or pods.
1. To complete your global proxy configuration for all outgoing HTTP/HTTPS traffic from your Kubernetes clusters, perform the following steps:
@@ -27,16 +27,16 @@ Configure <%= vars.product_short %> to use your proxy and activate the following
1. (Optional) If your HTTPS proxy uses basic authentication, enter the user name and password in the **HTTPS Proxy Credentials** fields.
1. Under **No Proxy**, enter the comma-separated list of IP addresses that must bypass the proxy to
- allow for internal <%= vars.product_short %> communication.
+ allow for internal Tanzu Kubernetes Grid Integrated Edition communication.
169.254.169.254, 10.100.0.0/8 and 10.200.0.0/8 IP address ranges, .internal, .svc, .svc.cluster.local, .svc.cluster,
- and your <%= vars.product_short %> FQDN are not proxied. This allows internal <%= vars.product_short %> communication.
+ and your Tanzu Kubernetes Grid Integrated Edition FQDN are not proxied. This allows internal Tanzu Kubernetes Grid Integrated Edition communication.
_ character in the No Proxy field. Entering an underscore character in this field can cause upgrades to fail.
@@ -93,7 +93,7 @@ Configure <%= vars.product_short %> to use your proxy and activate the following
10.100.0.0/8 and 10.200.0.0/8 IP address ranges, .internal, .svc, .svc.cluster.local, .svc.cluster,
- and your <%= vars.product_short %> FQDN are not proxied. This allows internal <%= vars.product_short %> communication.
+ and your Tanzu Kubernetes Grid Integrated Edition FQDN are not proxied. This allows internal Tanzu Kubernetes Grid Integrated Edition communication.
_ character in the No Proxy field. Entering an underscore character in this field can cause upgrades to fail.
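Putting the values named above together, a typical No Proxy entry is a single comma-separated list similar to the following; the API FQDN is a placeholder:

```
169.254.169.254,10.100.0.0/8,10.200.0.0/8,.internal,.svc,.svc.cluster.local,.svc.cluster,api.tkgi.example.com
```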
diff --git a/_harbor.html.md.erb b/_harbor.html.md.erb
index 4e69a3711..0f1129f0f 100644
--- a/_harbor.html.md.erb
+++ b/_harbor.html.md.erb
@@ -1 +1 @@
-Integrate VMware Harbor with <%= vars.product_short %> to store and manage container images. For more information, see [Integrating VMware Harbor Registry with <%= vars.product_short %>](https://docs.vmware.com/en/VMware-Harbor-Registry/services/vmware-harbor-registry/GUID-integrating-pks.html).
+Integrate VMware Harbor with Tanzu Kubernetes Grid Integrated Edition to store and manage container images. For more information, see [Integrating VMware Harbor Registry with Tanzu Kubernetes Grid Integrated Edition](https://docs.vmware.com/en/VMware-Harbor-Registry/services/vmware-harbor-registry/GUID-integrating-pks.html).
diff --git a/_host-monitoring.html.md.erb b/_host-monitoring.html.md.erb
index 4b61d91f8..76274ccce 100644
--- a/_host-monitoring.html.md.erb
+++ b/_host-monitoring.html.md.erb
@@ -24,18 +24,18 @@ You can configure one or more of the following:
* **VMware vRealize Log Insight (vRLI) Integration**: To configure VMware vRealize Log Insight (vRLI) Integration, see [VMware vRealize Log Insight Integration](#vrealize-logs) below.
The vRLI integration pulls logs from all BOSH jobs and containers running in the cluster, including node logs from core Kubernetes and BOSH processes, Kubernetes event logs, and pod `stdout` and `stderr`.
<% end %>
-* **Telegraf**: To configure Telegraf, see [Configuring Telegraf in <%= vars.k8s_runtime_abbr %>](monitor-etcd.html). The Telegraf agent sends metrics from TKGI API, control plane node, and worker node VMs to a monitoring service, such as Wavefront or Datadog.
+* **Telegraf**: To configure Telegraf, see [Configuring Telegraf in TKGI](monitor-etcd.html). The Telegraf agent sends metrics from TKGI API, control plane node, and worker node VMs to a monitoring service, such as Wavefront or Datadog.
For more information about these components, see
-[Monitoring <%= vars.k8s_runtime_abbr %> and <%= vars.k8s_runtime_abbr %>-Provisioned Clusters](host-monitoring.html).
+[Monitoring TKGI and TKGI-Provisioned Clusters](host-monitoring.html).
#### Syslog
-To configure Syslog for all BOSH-deployed VMs in <%= vars.product_short %>:
+To configure Syslog for all BOSH-deployed VMs in Tanzu Kubernetes Grid Integrated Edition:
1. Click **Host Monitoring**.
-1. Under **Enable Syslog for <%= vars.k8s_runtime_abbr %>**, select **Yes**.
+1. Under **Enable Syslog for TKGI**, select **Yes**.
1. Under **Address**, enter the destination syslog endpoint.
1. Under **Port**, enter the destination syslog port.
1. Under **Transport Protocol**, select a transport protocol for log forwarding.
diff --git a/_increase_persistent_disk.html.md.erb b/_increase_persistent_disk.html.md.erb
index 17fd15b22..a1e6ebd6a 100644
--- a/_increase_persistent_disk.html.md.erb
+++ b/_increase_persistent_disk.html.md.erb
@@ -2,7 +2,7 @@
### Storage Requirements for Large Numbers of Pods
If you expect the cluster workload to run a large number of pods continuously,
-then increase the size of persistent disk storage allocated to the <%= vars.control_plane_db %> VM as follows:
+then increase the size of persistent disk storage allocated to the TKGI Database VM as follows:
Configure Pod Security Admission. |
-Configure cluster-specific PSA in <%= vars.k8s_runtime_abbr %>. For more information, see Pod Security Admission in a <%= vars.k8s_runtime_abbr %> Cluster in Pod Security Admission in <%= vars.k8s_runtime_abbr %>. |
+Configure cluster-specific PSA in TKGI. For more information, see Pod Security Admission in a TKGI Cluster in Pod Security Admission in TKGI. |
Note: After you click Apply Changes for the first time,
-BOSH assigns the <%= vars.control_plane %> VM an IP address. BOSH uses the name you provide in the LOAD BALANCERS field
-to locate your load balancer and then connect the load balancer to the <%= vars.control_plane %> VM using its new IP address.
+BOSH assigns the TKGI API VM an IP address. BOSH uses the name you provide in the LOAD BALANCERS field
+to locate your load balancer and then connect the load balancer to the TKGI API VM using its new IP address.
diff --git a/_login-api.html.md.erb b/_login-api.html.md.erb
index bd6574a41..73e980708 100644
--- a/_login-api.html.md.erb
+++ b/_login-api.html.md.erb
@@ -4,9 +4,9 @@
```
Where:
- * `TKGI-API` is the domain name for the <%= vars.control_plane %> that you entered in **Ops Manager** > **<%= vars.product_tile %>** > **<%= vars.control_plane %>** > **API Hostname (FQDN)**.
+ * `TKGI-API` is the domain name for the TKGI API that you entered in **Ops Manager** > **Tanzu Kubernetes Grid Integrated Edition** > **TKGI API** > **API Hostname (FQDN)**.
For example, `api.tkgi.example.com`.
* `USERNAME` is your user name.
Note:
- Antrea is not supported for the <%= vars.k8s_runtime_abbr %> Windows-worker on vSphere without NSX beta feature.
+ Antrea is not supported for the TKGI Windows-worker on vSphere without NSX beta feature.
1. (Optional) Enter values for **Kubernetes Pod Network CIDR Range** and **Kubernetes Service Network CIDR Range**.
* For Windows worker-based clusters the **Kubernetes Service Network CIDR Range** setting must be `10.220.0.0/16`.
Note: vSphere on Flannel does not support networking Windows containers.
@@ -24,7 +24,7 @@ To configure networking, do the following:
<%= vars.recommended_by %> recommends that you switch your Flannel CNI-configured clusters to the Antrea CNI.
For more information about Flannel CNI deprecation, see About Switching from the Flannel CNI to the Antrea CNI
- in About <%= vars.product_short %> Upgrades.
+ in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
1. (Optional) Enter values for **Kubernetes Pod Network CIDR Range** and **Kubernetes Service Network CIDR Range**.
* Ensure that the CIDR ranges do not overlap and have sufficient space for your deployed services.
diff --git a/_nsx-t-ingress-lb-overview.html.md.erb b/_nsx-t-ingress-lb-overview.html.md.erb
index 6162f8f23..4e39032f0 100644
--- a/_nsx-t-ingress-lb-overview.html.md.erb
+++ b/_nsx-t-ingress-lb-overview.html.md.erb
@@ -2,17 +2,17 @@
The NSX Load Balancer is a logical load balancer that handles a number of functions using virtual servers and pools.
The NSX load balancer creates a load balancer service for each Kubernetes cluster provisioned
-by <%= vars.product_short %> with NSX. For each load balancer service, NCP, by way of the Kubernetes CustomResourceDefinition (CRD),
+by Tanzu Kubernetes Grid Integrated Edition with NSX. For each load balancer service, NCP, by way of the Kubernetes CustomResourceDefinition (CRD),
creates corresponding NSXLoadBalancerMonitor objects.
-By default <%= vars.product_short %> deploys the following NSX virtual servers for each Kubernetes cluster:
+By default Tanzu Kubernetes Grid Integrated Edition deploys the following NSX virtual servers for each Kubernetes cluster:
* One TCP layer 4 load balancer virtual server for the Kubernetes API server.
* One TCP layer 4 auto-scaled load balancer virtual server for **each** Kubernetes service resource of `type: LoadBalancer`.
* Two HTTP/HTTPS layer 7 ingress routing virtual servers. These virtual server are attached to the Kubernetes Ingress Controller cluster load balancer service and can be manually scaled.
-<%= vars.product_short %> uses Kubernetes custom resources to
+Tanzu Kubernetes Grid Integrated Edition uses Kubernetes custom resources to
monitor the state of the NSX load balancer service and scale the virtual servers created for ingress.
<% if current_page.data.lbtype == "monitor" %>
diff --git a/_other-super-certificates.html.md.erb b/_other-super-certificates.html.md.erb
index 608f3c341..670717df8 100644
--- a/_other-super-certificates.html.md.erb
+++ b/_other-super-certificates.html.md.erb
@@ -2,24 +2,24 @@ To create, delete, and modify NSX networking resources, <%= vars.platform_name %
Users configure <%= vars.platform_name %> to authenticate to NSX Manager for different purposes in different tiles:
-* **<%= vars.product_short %> tile**
Note: Before configuring your Windows worker plan, you must first activate and configure Plan 1.
-See Plans in Installing <%= vars.product_short %> on vSphere with NSX for more information.
+See Plans in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX for more information.
<% else %>
A plan defines a set of resource types used for deploying a cluster.
@@ -54,7 +54,7 @@ You must activate and configure either **Plan 11**, **Plan 12**, or **Plan 13**
<% end %>
1. Under **Name**, provide a unique name for the plan.
1. Under **Description**, edit the description as needed.
-The plan description appears in the Services Marketplace, which developers can access by using the <%= vars.k8s_runtime_abbr %> CLI.
+The plan description appears in the Services Marketplace, which developers can access by using the TKGI CLI.
<% if current_page.data.windowsclusters == true %>
1. Select **Enable HA Linux workers** to activate high availability Linux worker clusters.
A high availability Linux worker cluster consists of three Linux worker nodes.
@@ -65,19 +65,19 @@ A high availability Linux worker cluster consists of three Linux worker nodes.
You can enter 1, 3, or 5.
Note: If you deploy a cluster with multiple control plane/etcd node VMs,
confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see Hardware recommendations in the etcd documentation.
- In addition to meeting the hardware requirements for a multi-control plane node cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Configuring Telegraf in <%= vars.k8s_runtime_abbr %>.
-WARNING: To change the number of control plane/etcd nodes for a plan, you must ensure that no existing clusters use the plan. <%= vars.product_short %> does not support changing the number of control plane/etcd nodes for plans with existing clusters.
+ In addition to meeting the hardware requirements for a multi-control plane node cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Configuring Telegraf in TKGI.
+WARNING: To change the number of control plane/etcd nodes for a plan, you must ensure that no existing clusters use the plan. Tanzu Kubernetes Grid Integrated Edition does not support changing the number of control plane/etcd nodes for plans with existing clusters.
-1. Under **Master/ETCD VM Type**, select the type of VM to use for Kubernetes control plane/etcd nodes. For more information, including control plane node VM customization options, see the [Control Plane Node VM Size](vm-sizing.html#master-sizing) section of _VM Sizing for <%= vars.product_short %> Clusters_.
+1. Under **Master/ETCD VM Type**, select the type of VM to use for Kubernetes control plane/etcd nodes. For more information, including control plane node VM customization options, see the [Control Plane Node VM Size](vm-sizing.html#master-sizing) section of _VM Sizing for Tanzu Kubernetes Grid Integrated Edition Clusters_.
1. Under **Master Persistent Disk Type**, select the size of the persistent disk for the Kubernetes control plane node VM.
-1. Under **Master/ETCD Availability Zones**, select one or more AZs for the Kubernetes clusters deployed by <%= vars.product_short %>.
-If you select more than one AZ, <%= vars.product_short %> deploys the control plane VM in the first AZ and the worker VMs across the remaining AZs.
-If you are using multiple control plane nodes, <%= vars.product_short %> deploys the control plane and worker VMs across the AZs in round-robin fashion.
-Note: <%= vars.product_short %> does not support changing the AZs of existing control plane nodes.
+1. Under **Master/ETCD Availability Zones**, select one or more AZs for the Kubernetes clusters deployed by Tanzu Kubernetes Grid Integrated Edition.
+If you select more than one AZ, Tanzu Kubernetes Grid Integrated Edition deploys the control plane VM in the first AZ and the worker VMs across the remaining AZs.
+If you are using multiple control plane nodes, Tanzu Kubernetes Grid Integrated Edition deploys the control plane and worker VMs across the AZs in round-robin fashion.
+Note: Tanzu Kubernetes Grid Integrated Edition does not support changing the AZs of existing control plane nodes.
1. Under **Maximum number of workers on a cluster**, set the maximum number of
-Kubernetes worker node VMs that <%= vars.product_short %> can deploy for each cluster. Enter any whole number in this field.
+Kubernetes worker node VMs that Tanzu Kubernetes Grid Integrated Edition can deploy for each cluster. Enter any whole number in this field.
Note: Changing a plan's Worker Node Instances setting does not alter the number of worker nodes on existing clusters.
For information about scaling an existing cluster, see
- [Scale Horizontally by Changing the Number of Worker Nodes Using the <%= vars.k8s_runtime_abbr %> CLI](scale-clusters.html#scale-horizontal)
+ [Scale Horizontally by Changing the Number of Worker Nodes Using the TKGI CLI](scale-clusters.html#scale-horizontal)
in _Scaling Existing Clusters_.
1. Under **Worker VM Type**, select the type of VM to use for Kubernetes worker node VMs. For more information, including worker node VM customization options,
-see [Worker Node VM Number and Size](vm-sizing.html#worker-sizing) in _VM Sizing for <%= vars.product_short %> Clusters_.
+see [Worker Node VM Number and Size](vm-sizing.html#worker-sizing) in _VM Sizing for Tanzu Kubernetes Grid Integrated Edition Clusters_.
<% if current_page.data.iaas != "GCP" %>
Note:
- <%= vars.product_short %> requires a Worker VM Type with an ephemeral disk size of 32 GB or more.
+ Tanzu Kubernetes Grid Integrated Edition requires a Worker VM Type with an ephemeral disk size of 32 GB or more.
<% end %>
<% if current_page.data.windowsclusters == true %>
@@ -114,7 +114,7 @@ see [Worker Node VM Number and Size](vm-sizing.html#worker-sizing) in _VM Sizing
1. Under **Worker Persistent Disk Type**, select the size of the persistent disk for the Kubernetes worker node VMs.
<% end %>
-1. Under **Worker Availability Zones**, select one or more AZs for the Kubernetes worker nodes. <%= vars.product_short %> deploys worker nodes equally across the AZs you select.
+1. Under **Worker Availability Zones**, select one or more AZs for the Kubernetes worker nodes. Tanzu Kubernetes Grid Integrated Edition deploys worker nodes equally across the AZs you select.
1. Under **Kubelet customization - system-reserved**, enter resource values that Kubelet can use to reserve resources for system daemons.
For example, `memory=250Mi, cpu=150m`. For more information about system-reserved values,
@@ -129,7 +129,7 @@ see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-c
<% if current_page.data.windowsclusters == true %>
1. Under **Kubelet customization - Windows pause image location**, enter the location of your Windows pause image.
The **Kubelet customization - Windows pause image location** default value of `mcr.microsoft.com/k8s/core/pause:3.6`
-configures <%= vars.product_short %> to pull the Windows pause image from the Microsoft Docker registry.
+configures Tanzu Kubernetes Grid Integrated Edition to pull the Windows pause image from the Microsoft Docker registry.
Note: Support for SecurityContextDeny admission controller has been removed in TKGI v1.18. The SecurityContextDeny admission controller has been deprecated, and the Kubernetes community recommends the controller not be used. + Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in TKGI.
<% end %> <% if current_page.data.windowsclusters != true %> diff --git a/_ports-protocols-sphere.html.md.erb b/_ports-protocols-sphere.html.md.erb index 34388c489..cbd3a1552 100644 --- a/_ports-protocols-sphere.html.md.erb +++ b/_ports-protocols-sphere.html.md.erb @@ -93,7 +93,7 @@ The following table lists ports and protocols used for network communication bet | vRealize Operations Manager | NSX API VIP | TCP | 443 | HTTPS| <% else %> <% end %> -| vRealize Operations Manager | <%= vars.k8s_runtime_abbr %> Controller | TCP | 8443 | HTTPSCA| +| vRealize Operations Manager | TKGI Controller | TCP | 8443 | HTTPSCA| | vRealize Operations Manager | Kubernetes Cluster API Server -LB VIP | TCP | 8443 | HTTPSCA| | Admin/Operator Console | vRealize LogInsight | TCP | 443 | HTTPS| | Kubernetes Cluster Ingress Controller | vRealize LogInsight | TCP | 9000 | ingestion api| @@ -105,11 +105,11 @@ The following table lists ports and protocols used for network communication bet | NSX Manager/Controller Node | vRealize LogInsight | TCP | 9000 | ingestion api| <% else %> <% end %> -| <%= vars.k8s_runtime_abbr %> Controller | vRealize LogInsight | TCP | 9000 | ingestion api| +| TKGI Controller | vRealize LogInsight | TCP | 9000 | ingestion api| | Admin/Operator and Developer Consoles | Wavefront SaaS APM | TCP | 443 | HTTPS| | kube-system pod/wavefront-proxy | Wavefront SaaS APM | TCP | 443 | HTTPS| | kube-system pod/wavefront-proxy | Wavefront SaaS APM | TCP | 8443 | HTTPSCA| -| pks-system pod/wavefront-collector | <%= vars.k8s_runtime_abbr %> Controller | TCP | 24224 | Fluentd out_forward| +| pks-system pod/wavefront-collector | TKGI Controller | TCP | 24224 | Fluentd out_forward| | Admin/Operator Console | vRealize Network Insight Platform | TCP | 443 | HTTPS| | Admin/Operator Console | vRealize Network Insight Proxy | TCP | 22 | SSH| | vRealize Network Insight Proxy | Kubernetes Cluster API Server -LB VIP | TCP | 8443 | HTTPSCA| @@ -118,5 +118,5 @@ The following table lists ports and protocols used for network communication bet | vRealize Network Insight Proxy | NSX API VIP | TCP | 443 | HTTPS| <% else %> <% end %> -| vRealize Network Insight Proxy | <%= vars.k8s_runtime_abbr %> Controller | TCP | 8443 | HTTPSCA| -| vRealize Network Insight Proxy | <%= vars.k8s_runtime_abbr %> Controller | TCP | 9021 | TKGI API server| +| vRealize Network Insight Proxy | TKGI Controller | TCP | 8443 | HTTPSCA| +| vRealize Network Insight Proxy | TKGI Controller | TCP | 9021 | TKGI API server| diff --git a/_ports-protocols.html.md.erb b/_ports-protocols.html.md.erb index e0c0283fe..1f2050f7a 100644 --- a/_ports-protocols.html.md.erb +++ b/_ports-protocols.html.md.erb @@ -1,25 +1,25 @@Note: The type:NodePort
Service type is not supported for <%= vars.k8s_runtime_abbr %> deployments on vSphere with NSX.
+
Note: The type:NodePort
Service type is not supported for TKGI deployments on vSphere with NSX.
Only type:LoadBalancer
and Services associated with Ingress rules are supported on vSphere with NSX.
Warning: High availability mode is a beta feature. Do not scale your TKGI API or TKGI Database to more than one instance in production environments.
<% if current_page.data.iaas == "Azure" %>Note: On Azure, you must reconfigure your - <%= vars.control_plane %> load balancer backend pool - whenever you modify your <%= vars.control_plane %> VM group. - For more information about configuring your <%= vars.control_plane %> + TKGI API load balancer backend pool + whenever you modify your TKGI API VM group. + For more information about configuring your TKGI API load balancer backend pool, see Create a Load Balancer in Configuring an Azure Load Balancer for the TKGI API. @@ -23,11 +23,11 @@ For each job, review the **Automatic** values in the following fields: Provisioning an NSX Load Balancer for the TKGI API Server.
<% end %> - * **VM TYPE**: By default, the **<%= vars.control_plane_db %>** and **<%= vars.control_plane %>** jobs are set to the same **Automatic** VM type. + * **VM TYPE**: By default, the **TKGI Database** and **TKGI API** jobs are set to the same **Automatic** VM type. If you want to adjust this value, we recommend that you select the same VM type for both jobs. -Note: The Automatic VM TYPE values match the recommended resource configuration for the <%= vars.control_plane %> - and <%= vars.control_plane_db %> jobs. +
Note: The Automatic VM TYPE values match the recommended resource configuration for the TKGI API + and TKGI Database jobs.
- * **PERSISTENT DISK TYPE**: By default, the **<%= vars.control_plane_db %>** and **<%= vars.control_plane %>** jobs are set to the same persistent disk type. + * **PERSISTENT DISK TYPE**: By default, the **TKGI Database** and **TKGI API** jobs are set to the same persistent disk type. If you want to adjust this value, you can change the persistent disk type for each of the jobs independently. Using the same persistent disk type for both jobs is not required. diff --git a/_saml-sso-login.html.md.erb b/_saml-sso-login.html.md.erb index 8176c5748..77d491b7b 100644 --- a/_saml-sso-login.html.md.erb +++ b/_saml-sso-login.html.md.erb @@ -1,4 +1,4 @@ -Note: If your operator has configured <%= vars.product_short %> to use a SAML identity provider, - you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in <%= vars.k8s_runtime_abbr %> CLI. For information about configuring SAML, - see Connecting <%= vars.product_short %> to a SAML Identity Provider +
Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, + you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, + see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider
diff --git a/_scale-to-ha-upgrade.html.md.erb b/_scale-to-ha-upgrade.html.md.erb index 7008c5308..e571ab70b 100644 --- a/_scale-to-ha-upgrade.html.md.erb +++ b/_scale-to-ha-upgrade.html.md.erb @@ -2,22 +2,22 @@Note: On Azure, you must reconfigure your
- <%= vars.control_plane %> load balancer backend pool
- whenever you modify your <%= vars.control_plane %> VM group.
- For more information about configuring your <%= vars.control_plane %>
+ TKGI API load balancer backend pool
+ whenever you modify your TKGI API VM group.
+ For more information about configuring your TKGI API
load balancer backend pool, see
Create a Load Balancer
in Configuring an Azure Load Balancer for the TKGI API.
diff --git a/_share-endpoint.html.md.erb b/_share-endpoint.html.md.erb
index 0c4ac1b1c..a73825494 100644
--- a/_share-endpoint.html.md.erb
+++ b/_share-endpoint.html.md.erb
@@ -1,7 +1,7 @@
-You need to retrieve the <%= vars.control_plane %> endpoint to allow your organization to use the API to create, update, and delete Kubernetes clusters.
+You need to retrieve the TKGI API endpoint to allow your organization to use the API to create, update, and delete Kubernetes clusters.
-To retrieve the <%= vars.control_plane %> endpoint, do the following:
+To retrieve the TKGI API endpoint, do the following:
1. Navigate to the Ops Manager **Installation Dashboard**.
-1. Click the **<%= vars.product_tile %>** tile.
-1. Click the **Status** tab and locate the **<%= vars.control_plane %>** job. The IP address of the <%= vars.control_plane %> job is the <%= vars.control_plane %> endpoint.
+1. Click the **Tanzu Kubernetes Grid Integrated Edition** tile.
+1. Click the **Status** tab and locate the **TKGI API** job. The IP address of the TKGI API job is the TKGI API endpoint.
diff --git a/_tmc.html.md.erb b/_tmc.html.md.erb
index b931a8479..7eca0652c 100644
--- a/_tmc.html.md.erb
+++ b/_tmc.html.md.erb
@@ -1,17 +1,17 @@
<% if current_page.data.iaas != "GCP" %>
Tanzu Mission Control integration lets you monitor and manage
-<%= vars.product_tile %> clusters from the Tanzu Mission Control console,
+Tanzu Kubernetes Grid Integrated Edition clusters from the Tanzu Mission Control console,
which makes the Tanzu Mission Control console a single point of control
for all Kubernetes clusters. For more information about Tanzu Mission Control, see the VMware Tanzu Mission Control home page.
-To integrate <%= vars.product_short %> with Tanzu Mission Control:
+To integrate Tanzu Kubernetes Grid Integrated Edition with Tanzu Mission Control:
-1. Confirm that the <%= vars.control_plane %> VM has internet access and
+1. Confirm that the TKGI API VM has internet access and
can connect to `cna.tmc.cloud.vmware.com` and the other outbound URLs listed in
the [What Happens When You Attach a Cluster](https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzumc-concepts/GUID-147472ED-16BB-4AAA-9C35-A951C5ADA88A.html) section of the Tanzu Mission Control Product
documentation.
-1. Navigate to the **<%= vars.product_tile %>** tile > the **Tanzu Mission Control** pane and
+1. Navigate to the **Tanzu Kubernetes Grid Integrated Edition** tile > the **Tanzu Mission Control** pane and
select **Yes** under **Tanzu Mission Control Integration**.
@@ -37,15 +37,15 @@ select **Yes** under **Tanzu Mission Control Integration**.
For more information about role and access policy,
see [Access Control](https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzumc-concepts/GUID-EB9C6D83-1132-444F-8218-F264E43F25BD.html) in the VMware Tanzu Mission Control Product documentation.
- - **Tanzu Mission Control Cluster Name Prefix**: Enter a name prefix for identifying the <%= vars.product_short %> clusters in Tanzu Mission Control.
+ - **Tanzu Mission Control Cluster Name Prefix**: Enter a name prefix for identifying the Tanzu Kubernetes Grid Integrated Edition clusters in Tanzu Mission Control.
1. Click **Save**.
-
Warning: After the <%= vars.product_tile %> tile is deployed with a configured cluster group, the cluster group cannot be updated.
+Warning: After the Tanzu Kubernetes Grid Integrated Edition tile is deployed with a configured cluster group, the cluster group cannot be updated.
Note: When you upgrade your Kubernetes clusters and have Tanzu Mission Control integration enabled, existing clusters will be attached to Tanzu Mission Control.
<% else %> -<%= vars.product_short %> does not support Tanzu Mission Control integration on GCP. +Tanzu Kubernetes Grid Integrated Edition does not support Tanzu Mission Control integration on GCP. Skip this configuration pane. <% end %> diff --git a/_uaa-admin-login.html.md.erb b/_uaa-admin-login.html.md.erb index 8913f8093..82658a5c7 100644 --- a/_uaa-admin-login.html.md.erb +++ b/_uaa-admin-login.html.md.erb @@ -1,8 +1,8 @@ -Before creating <%= vars.k8s_runtime_abbr %> users, you must log in to the UAA server as a UAA admin. To log in to the UAA server, do the following: +Before creating TKGI users, you must log in to the UAA server as a UAA admin. To log in to the UAA server, do the following: 1. Retrieve the UAA management admin client secret: - 1. In a web browser, navigate to the Ops Manager **Installation Dashboard** and click the **<%= vars.product_tile %>** tile. + 1. In a web browser, navigate to the Ops Manager **Installation Dashboard** and click the **Tanzu Kubernetes Grid Integrated Edition** tile. 1. Click the **Credentials** tab. @@ -16,8 +16,8 @@ Before creating <%= vars.k8s_runtime_abbr %> users, you must log in to the UAA s Where: - * `TKGI-API` is the domain name of your <%= vars.control_plane %> server. You entered this domain name in the **<%= vars.product_tile %>** tile > **<%= vars.control_plane %>** > **API Hostname (FQDN)**. - * `CERTIFICATE-PATH` is the path to your Ops Manager root CA certificate. Provide this certificate to validate the <%= vars.control_plane %> certificate with SSL. + * `TKGI-API` is the domain name of your TKGI API server. You entered this domain name in the **Tanzu Kubernetes Grid Integrated Edition** tile > **TKGI API** > **API Hostname (FQDN)**. + * `CERTIFICATE-PATH` is the path to your Ops Manager root CA certificate. Provide this certificate to validate the TKGI API certificate with SSL. * If you are logged in to the Ops Manager VM, specify `/var/tempest/workspaces/default/root_ca_certificate` as the path. This is the default location of the root certificate on the Ops Manager VM. * If you downloaded the Ops Manager root CA certificate to your machine, specify the path where you stored the certificate. diff --git a/_uaa-scopes.html.md.erb b/_uaa-scopes.html.md.erb index b61f58fdb..e7951ec24 100644 --- a/_uaa-scopes.html.md.erb +++ b/_uaa-scopes.html.md.erb @@ -1,6 +1,6 @@ -By assigning UAA scopes, you grant users the ability to create, manage, and audit Kubernetes clusters in <%= vars.product_short %>. +By assigning UAA scopes, you grant users the ability to create, manage, and audit Kubernetes clusters in Tanzu Kubernetes Grid Integrated Edition. -A UAA admin user can assign the following UAA scopes to <%= vars.product_short %> users: +A UAA admin user can assign the following UAA scopes to Tanzu Kubernetes Grid Integrated Edition users: * `pks.clusters.admin`: Accounts with this scope can create and access all clusters. * `pks.clusters.manage`: Accounts with this scope can create and access their own clusters. diff --git a/_uaa.html.md.erb b/_uaa.html.md.erb index 20f6ebb2d..d3c56dcf9 100644 --- a/_uaa.html.md.erb +++ b/_uaa.html.md.erb @@ -1,16 +1,16 @@ To configure the UAA server: 1. Click **UAA**. -1. Under **<%= vars.control_plane %> Access Token Lifetime**, enter a time in seconds for the -<%= vars.control_plane %> access token lifetime. This field defaults to `600`. +1. Under **TKGI API Access Token Lifetime**, enter a time in seconds for the +TKGI API access token lifetime. 
This field defaults to `600`.
Note: <%= vars.recommended_by %> recommends using the default UAA token timeout values.
@@ -19,10 +19,10 @@
after six hours.
1. Under **Configure created clusters to use UAA as the OIDC provider**, select **Enabled** or **Disabled**. This is a global default setting for -<%= vars.k8s_runtime_abbr %>-provisioned clusters. For more information, see +TKGI-provisioned clusters. For more information, see [OIDC Provider for Kubernetes Clusters](oidc-provider.html).Warning: <%= vars.recommended_by %> recommends adding OIDC prefixes to prevent users and groups from gaining unintended cluster privileges. If you change the above values for a - pre-existing <%=vars.product_short %> installation, you must change any + pre-existing Tanzu Kubernetes Grid Integrated Edition installation, you must change any existing role bindings that bind to a user name or group. If you do not change your role bindings, developers cannot access Kubernetes clusters. For instructions, see Managing Cluster Access and Permissions.
1. (Optional) For **TKGI cluster client redirect URIs**, enter one or more comma-delimited UAA redirect URIs. Configure **TKGI cluster client redirect URIs** to assign persistent UAA `cluster_client` `redirect_uri` URIs to your clusters. -UAA redirect URIs configured in the **TKGI cluster client redirect URIs** field persist through cluster updates and <%= vars.k8s_runtime_abbr %> upgrades. +UAA redirect URIs configured in the **TKGI cluster client redirect URIs** field persist through cluster updates and TKGI upgrades. 1. Select one of the following options: * To use an internal user account store for UAA, select **Internal UAA**. Click **Save** and continue to [(Optional) Host Monitoring](#syslog). * To use LDAP for UAA, select **LDAP Server** and continue to - [Connecting <%= vars.product_short %> to an LDAP Server](configuring-ldap.html). + [Connecting Tanzu Kubernetes Grid Integrated Edition to an LDAP Server](configuring-ldap.html). * To use SAML for UAA, select **SAML Identity Provider** and continue to - [Connecting <%= vars.product_short %> to a SAML Identity Provider](configuring-saml.html). + [Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider](configuring-saml.html). diff --git a/_usage-data.html.md.erb b/_usage-data.html.md.erb index 8ac0aa650..b01df877f 100644 --- a/_usage-data.html.md.erb +++ b/_usage-data.html.md.erb @@ -1,8 +1,8 @@ -<%= vars.product_short %>-provisioned clusters send usage data to the <%= vars.k8s_runtime_abbr %> control plane for storage. +Tanzu Kubernetes Grid Integrated Edition-provisioned clusters send usage data to the TKGI control plane for storage. The VMware Customer Experience Improvement Program (CEIP) provides the option to also send the cluster usage data to VMware to improve customer experience. -To configure <%= vars.product_short %> CEIP Program settings: +To configure Tanzu Kubernetes Grid Integrated Edition CEIP Program settings: 1. Click **CEIP**. 1. Review the information about the CEIP. @@ -18,7 +18,7 @@ To configure <%= vars.product_short %> CEIP Program settings: * (Optional) Enter your entitlement account number or Tanzu customer number. If you are a VMware customer, you can find your entitlement account number in your **Account Summary** on [my.vmware.com](https://my.vmware.com). If you are a Pivotal customer, you can find your Pivotal Customer Number in your Pivotal Order Confirmation email. - * (Optional) Enter a descriptive name for your <%= vars.k8s_runtime_abbr %> installation. + * (Optional) Enter a descriptive name for your TKGI installation. The label you assign to this installation will be used in CEIP reports to identify the environment. 1. To provide information about the purpose for this installation, select an option.  diff --git a/_vrealize-logs.html.md.erb b/_vrealize-logs.html.md.erb index f16cb1738..95db6ca4e 100644 --- a/_vrealize-logs.html.md.erb +++ b/_vrealize-logs.html.md.erb @@ -21,4 +21,4 @@ The default value `0` means that the rate is not limited, which suffices for man A large number might result in dropping too many log entries. 1. Click **Save**. These settings apply to any clusters created after you have saved these configuration settings and clicked **Apply Changes**. If the **Upgrade all clusters errand** has been enabled, these settings are also applied to existing clusters. -Note: The <%= vars.product_tile %> tile does not validate your vRLI configuration settings. To verify your setup, look for log entries in vRLI.
+Note: The Tanzu Kubernetes Grid Integrated Edition tile does not validate your vRLI configuration settings. To verify your setup, look for log entries in vRLI.
diff --git a/_vsphere_versions.html.md.erb b/_vsphere_versions.html.md.erb index 20ddf8032..4aa8906fd 100644 --- a/_vsphere_versions.html.md.erb +++ b/_vsphere_versions.html.md.erb @@ -1,2 +1,2 @@ -For <%= vars.product_short %> on vSphere version requirements, refer to the VMware Product Interoperability Matrices. +For Tanzu Kubernetes Grid Integrated Edition on vSphere version requirements, refer to the VMware Product Interoperability Matrices. diff --git a/about-lb.html.md.erb b/about-lb.html.md.erb index 3c87fa9cf..68d2bd8bd 100644 --- a/about-lb.html.md.erb +++ b/about-lb.html.md.erb @@ -3,31 +3,31 @@ title: Load Balancers in Tanzu Kubernetes Grid Integrated Edition owner: TKGI --- -This topic describes the <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) load balancers for the <%= vars.control_plane %> and <%= vars.k8s_runtime_abbr %> clusters and workloads. +This topic describes the VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) load balancers for the TKGI API and TKGI clusters and workloads.Note: The NodePort
Service type is not supported for <%= vars.product_short %> deployments on vSphere with NSX. Only type:LoadBalancer
Services and Services associated with Ingress rules are supported on vSphere with NSX.
Note: The NodePort
Service type is not supported for Tanzu Kubernetes Grid Integrated Edition deployments on vSphere with NSX. Only type:LoadBalancer
Services and Services associated with Ingress rules are supported on vSphere with NSX.
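For example, a minimal sketch of exposing a workload through a `type: LoadBalancer` Service with kubectl; the deployment name, image, and port below are illustrative only:

```
# Create a sample workload and expose it with a LoadBalancer Service
# (deployment name, image, and port are placeholders).
kubectl create deployment echoserver --image=nginx --port=80
kubectl expose deployment echoserver --type=LoadBalancer --port=80 --target-port=80

# Once NCP provisions a virtual server on the NSX load balancer, the
# assigned address appears in the EXTERNAL-IP column.
kubectl get service echoserver --watch
```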
Note: Support for SecurityContextDeny admission controller has been removed in <%= vars.k8s_runtime_abbr %> v1.18. The SecurityContextDeny admission controller has been deprecated, and the Kubernetes community recommends the controller not be used. - Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in <%= vars.k8s_runtime_abbr %>. +* [Enabling the PodSecurityAdmission Plugin for Tanzu Kubernetes Grid Integrated Edition Clusters and Using Pod Security Admission](./pod-security-admission.html) +* [Enabling the SecurityContextDeny Admission Plugin for Tanzu Kubernetes Grid Integrated Edition Clusters](./security-context-deny.html) +
Note: Support for the SecurityContextDeny admission controller has been removed in TKGI v1.18. SecurityContextDeny has been deprecated, and the Kubernetes community recommends against using it. +    Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in TKGI.
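As an illustration of the PSA approach, you can enforce a Pod Security Standard on a namespace by labeling it. This is a minimal sketch; the namespace name and the `restricted` level are examples only:

```
# Example only: create a namespace and enforce the "restricted"
# Pod Security Standard on it with PSA labels.
kubectl create namespace psa-demo
kubectl label namespace psa-demo \
    pod-security.kubernetes.io/enforce=restricted \
    pod-security.kubernetes.io/warn=restricted
```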
To deactivate an admission control plugin, see: -* [Deactivating Admission Control Plugins for <%= vars.product_short %> Clusters](./admission-plugins-disable.html) +* [Deactivating Admission Control Plugins for Tanzu Kubernetes Grid Integrated Edition Clusters](./admission-plugins-disable.html) diff --git a/api-auth.html.md.erb b/api-auth.html.md.erb index 1ae46a09b..6f3cd8c2d 100644 --- a/api-auth.html.md.erb +++ b/api-auth.html.md.erb @@ -3,29 +3,29 @@ title: TKGI API Authentication owner: TKGI --- -This topic describes how the <%= vars.product_full %> API (<%= vars.control_plane %>) works with User Account and Authentication (UAA) to manage <%= vars.k8s_runtime_abbr %> deployment authentication and authorization. +This topic describes how the VMware Tanzu Kubernetes Grid Integrated Edition API (TKGI API) works with User Account and Authentication (UAA) to manage TKGI deployment authentication and authorization. -## Authentication of <%= vars.control_plane %> Requests +## Authentication of TKGI API Requests -Before users can log in and use the <%= vars.k8s_runtime_abbr %> CLI, you must configure <%= vars.control_plane %> access with UAA. For more information, -see [Managing Tanzu Kubernetes Grid Integrated Edition Users with UAA](manage-users.html) and [Logging in to <%= vars.product_short %>](login.html). +Before users can log in and use the TKGI CLI, you must configure TKGI API access with UAA. For more information, +see [Managing Tanzu Kubernetes Grid Integrated Edition Users with UAA](manage-users.html) and [Logging in to Tanzu Kubernetes Grid Integrated Edition](login.html). You use the UAA Command Line Interface (UAAC) to target the UAA server and request an access token for the UAA admin user. If your request is successful, the UAA server returns the access token. -The UAA admin access token authorizes you to make requests to the <%= vars.control_plane %> using the <%= vars.k8s_runtime_abbr %> CLI and grant cluster access to new or existing users. +The UAA admin access token authorizes you to make requests to the TKGI API using the TKGI CLI and grant cluster access to new or existing users. -When a user with cluster access logs in to the <%= vars.k8s_runtime_abbr %> CLI, the CLI requests an access token for the user from the UAA server. -If the request is successful, the UAA server returns an access token to the <%= vars.k8s_runtime_abbr %> CLI. -When the user runs <%= vars.k8s_runtime_abbr %> CLI commands, for example, `tkgi clusters`, the CLI sends the request to the <%= vars.control_plane %> server and includes the user's UAA token. +When a user with cluster access logs in to the TKGI CLI, the CLI requests an access token for the user from the UAA server. +If the request is successful, the UAA server returns an access token to the TKGI CLI. +When the user runs TKGI CLI commands, for example, `tkgi clusters`, the CLI sends the request to the TKGI API server and includes the user's UAA token. -The <%= vars.control_plane %> sends a request to the UAA server to validate the user's token. -If the UAA server confirms that the token is valid, the <%= vars.control_plane %> uses the cluster information from the <%= vars.k8s_runtime_abbr %> broker to respond to the request. +The TKGI API sends a request to the UAA server to validate the user's token. +If the UAA server confirms that the token is valid, the TKGI API uses the cluster information from the TKGI broker to respond to the request. 
For example, if the user runs `tkgi clusters`, the CLI returns a list of the clusters that the user is authorized to manage. -##Routing to the <%= vars.control_plane %> VM +##Routing to the TKGI API VM -The <%= vars.control_plane %> server and the UAA server use different port numbers on the API VM. -For example, if your <%= vars.control_plane %> domain is `api.tkgi.example.com`, you can reach your <%= vars.control_plane %> and UAA servers at the following URLs: +The TKGI API server and the UAA server use different port numbers on the API VM. +For example, if your TKGI API domain is `api.tkgi.example.com`, you can reach your TKGI API and UAA servers at the following URLs:URL | ||
---|---|---|
<%= vars.control_plane %> | +TKGI API | api.tkgi.example.com:9021 |
Note: If Kubernetes control plane node VMs are recreated for any reason, you must reconfigure your -AWS <%= vars.k8s_runtime_abbr %> cluster load balancers to point to the new control plane VMs.
+AWS TKGI cluster load balancers to point to the new control plane VMs. ## Prerequisite -The version of the <%= vars.k8s_runtime_abbr %> CLI you are using must match the version of the <%= vars.product_tile %> tile that you are installing. +The version of the TKGI CLI you are using must match the version of the Tanzu Kubernetes Grid Integrated Edition tile that you are installing. -Note: Modify the example commands in this procedure to match the details of your <%= vars.product_short %> installation.
+Note: Modify the example commands in this procedure to match the details of your Tanzu Kubernetes Grid Integrated Edition installation.
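For example, assuming the TKGI CLI's standard version flag, you can confirm the CLI version before you continue:

```
# Confirm that the TKGI CLI version matches the tile version you are installing.
tkgi --version
```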
## Configure AWS Load Balancer diff --git a/aws-configure-users.html.md.erb b/aws-configure-users.html.md.erb index 20f75e90b..69d22dc9b 100644 --- a/aws-configure-users.html.md.erb +++ b/aws-configure-users.html.md.erb @@ -4,33 +4,33 @@ owner: TKGI iaas: AWS --- -This topic describes how to create <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) admin users with User Account and Authentication (UAA). +This topic describes how to create VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) admin users with User Account and Authentication (UAA). ## Overview -UAA is the identity management service for <%= vars.k8s_runtime_abbr %>. -You must use UAA to create an admin user during your initial set up of <%= vars.k8s_runtime_abbr %>. +UAA is the identity management service for TKGI. +You must use UAA to create an admin user during your initial set up of TKGI. -<%= vars.k8s_runtime_abbr %> includes a UAA server, hosted on the <%= vars.control_plane %> VM. -Use the UAA Command Line Interface (UAAC) from the <%= vars.ops_manager_full %> (<%= vars.ops_manager %>) VM to interact with the <%= vars.k8s_runtime_abbr %> UAA server. +TKGI includes a UAA server, hosted on the TKGI API VM. +Use the UAA Command Line Interface (UAAC) from the VMware Tanzu Operations Manager (Ops Manager) VM to interact with the TKGI UAA server. You can also install UAAC on a workstation and run UAAC commands from there. ## Prerequisites -Before setting up admin users for <%= vars.product_short %>, you must have one of the following: +Before setting up admin users for Tanzu Kubernetes Grid Integrated Edition, you must have one of the following: * SSH access to the Ops Manager VM -* A machine that can connect to your <%= vars.control_plane %> VM +* A machine that can connect to your TKGI API VM -## Step 1: Connect to the <%= vars.control_plane %> VM +## Step 1: Connect to the TKGI API VM -You can connect to the <%= vars.control_plane %> VM from the Ops Manager VM or from a different machine such as your local workstation. +You can connect to the TKGI API VM from the Ops Manager VM or from a different machine such as your local workstation. ### Option 1: Connect through the Ops Manager VM -You can connect to the <%= vars.control_plane %> VM by logging in to the Ops Manager VM through SSH. +You can connect to the TKGI API VM by logging in to the Ops Manager VM through SSH. To SSH into the Ops Manager VM on AWS, do the following: 1. Retrieve the key pair you used when you @@ -62,7 +62,7 @@ created the Ops Manager VM. To see the name of the key pair: ### Option 2: Connect through a Non-Ops Manager Machine -To connect to the <%= vars.control_plane %> VM and run UAA commands, do the following: +To connect to the TKGI API VM and run UAA commands, do the following: 1. Install UAAC on your machine. For example: @@ -83,25 +83,25 @@ To connect to the <%= vars.control_plane %> VM and run UAA commands, do the foll <%= partial 'uaa-admin-login' %> -##Step 3: Assign <%= vars.product_short %> Cluster Scopes +##Step 3: Assign Tanzu Kubernetes Grid Integrated Edition Cluster Scopes The `pks.clusters.manage` and `pks.clusters.admin` UAA scopes grant users the ability -to create and manage Kubernetes clusters in <%= vars.product_short %>. -For information about UAA scopes in <%= vars.product_short %>, see -[UAA Scopes for <%= vars.product_short %> Users](uaa-scopes.html). +to create and manage Kubernetes clusters in Tanzu Kubernetes Grid Integrated Edition. 
+For information about UAA scopes in Tanzu Kubernetes Grid Integrated Edition, see +[UAA Scopes for Tanzu Kubernetes Grid Integrated Edition Users](uaa-scopes.html). -To create <%= vars.product_short %> users with the `pks.clusters.manage` or `pks.clusters.admin` UAA scope, +To create Tanzu Kubernetes Grid Integrated Edition users with the `pks.clusters.manage` or `pks.clusters.admin` UAA scope, perform one or more of the following procedures based on the needs of your deployment: -* To assign <%= vars.k8s_runtime_abbr %> cluster scopes to an individual user, see -[Grant <%= vars.product_short %> Access to an Individual User](manage-users.html#uaa-user). - Follow this procedure if you selected **Internal UAA** when you configured **UAA** in the <%= vars.product_short %> tile. For more information, see [Installing <%= vars.product_short %> on AWS](installing-aws.html#uaa). -* To assign <%= vars.k8s_runtime_abbr %> cluster scopes to an LDAP group, see [Grant <%= vars.product_short %> Access to an External LDAP Group](manage-users.html#external-group). Follow this procedure if you selected **LDAP Server** when you configured **UAA** in the <%= vars.product_short %> tile. For more information, see [Installing <%= vars.product_short %> <%= vars.k8s_runtime_abbr %> on AWS](installing-aws.html#uaa). -* To assign <%= vars.k8s_runtime_abbr %> cluster scopes to a SAML group, see [Grant <%= vars.product_short %> Access to an External SAML Group](manage-users.html#saml). Follow this procedure if you selected **SAML Identity Provider** when you configured **UAA** in the <%= vars.product_short %> tile. For more information, see [Installing <%= vars.product_short %> <%= vars.k8s_runtime_abbr %> on AWS](installing-aws.html#uaa). -* To assign <%= vars.k8s_runtime_abbr %> cluster scopes to a client, see [Grant <%= vars.product_short %> Access to a Client](manage-users.html#uaa-client). +* To assign TKGI cluster scopes to an individual user, see +[Grant Tanzu Kubernetes Grid Integrated Edition Access to an Individual User](manage-users.html#uaa-user). + Follow this procedure if you selected **Internal UAA** when you configured **UAA** in the Tanzu Kubernetes Grid Integrated Edition tile. For more information, see [Installing Tanzu Kubernetes Grid Integrated Edition on AWS](installing-aws.html#uaa). +* To assign TKGI cluster scopes to an LDAP group, see [Grant Tanzu Kubernetes Grid Integrated Edition Access to an External LDAP Group](manage-users.html#external-group). Follow this procedure if you selected **LDAP Server** when you configured **UAA** in the Tanzu Kubernetes Grid Integrated Edition tile. For more information, see [Installing Tanzu Kubernetes Grid Integrated Edition on AWS](installing-aws.html#uaa). +* To assign TKGI cluster scopes to a SAML group, see [Grant Tanzu Kubernetes Grid Integrated Edition Access to an External SAML Group](manage-users.html#saml). Follow this procedure if you selected **SAML Identity Provider** when you configured **UAA** in the Tanzu Kubernetes Grid Integrated Edition tile. For more information, see [Installing Tanzu Kubernetes Grid Integrated Edition on AWS](installing-aws.html#uaa). +* To assign TKGI cluster scopes to a client, see [Grant Tanzu Kubernetes Grid Integrated Edition Access to a Client](manage-users.html#uaa-client). ## Next Step -After you create admin users in <%= vars.product_short %>, the admin users can create and manage -Kubernetes clusters in <%= vars.product_short %>.
+After you create admin users in Tanzu Kubernetes Grid Integrated Edition, the admin users can create and manage +Kubernetes clusters in Tanzu Kubernetes Grid Integrated Edition. For more information, see [Managing Kubernetes Clusters and Workloads](managing-clusters.html). diff --git a/aws-index.html.md.erb b/aws-index.html.md.erb index b99f17027..cbbfebdaa 100644 --- a/aws-index.html.md.erb +++ b/aws-index.html.md.erb @@ -4,11 +4,11 @@ owner: Ops Manager iaas: AWS --- -The topics below describe how to install <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on Amazon Web Services (AWS). +The topics below describe how to install VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on Amazon Web Services (AWS). -## Install <%= vars.product_short %> on AWS +## Install Tanzu Kubernetes Grid Integrated Edition on AWS -To install <%= vars.product_short %> on AWS, follow the instructions below: +To install Tanzu Kubernetes Grid Integrated Edition on AWS, follow the instructions below:1 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
<%= vars.control_plane %> | +TKGI API | m4.large | 1 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
<%= vars.control_plane_db %> | +TKGI Database | m4.large | 1 |
120 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
<%= vars.control_plane %> | +TKGI API | 2 | 8 | 64 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
<%= vars.control_plane_db %> | +TKGI Database | 2 | 8 | 64 | @@ -62,7 +62,7 @@ Installing Ops Manager and <%= vars.product_short %> requires the following virt
pxc_server_ca and leaf certificates |
Four years | -See Rotate <%= vars.k8s_runtime_abbr %> Control Plane Certificates + | See Rotate TKGI Control Plane Certificates or How to rotate TKGI control plane CA and leaf certificates in the VMware Tanzu Knowledge Base. |
kubo_odb_ca_2018 and leaf certificates |
pks_tls |
Admin-defined | -Open the <%= vars.k8s_runtime_abbr %> API tab on the <%= vars.k8s_runtime_abbr %> tile.
- The <%= vars.k8s_runtime_abbr %> API Service certificate is used to secure access to the <%= vars.k8s_runtime_abbr %> API endpoint. + | Open the TKGI API tab on the TKGI tile.
+ The TKGI API Service certificate is used to secure access to the TKGI API endpoint. |
cpi |
String | -BOSH CPI ID of your <%= vars.k8s_runtime_abbr %> deployment. For example, + | BOSH CPI ID of your TKGI deployment. For example,
abc012abc345abc567de . For instructions on how to obtain the ID, see Retrieve the BOSH CPI ID. |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Networking: VMware NSX (Bring Your Own Topology) | -
All other options. |
|
160 | ||||
<%= vars.control_plane %> | +TKGI API | 2 | 8 | 64 |
<%= vars.control_plane_db %> | +TKGI Database | 2 | 8 | 64 |
NOTE: VMware recommends deploying <%= vars.k8s_runtime_abbr %> on its own dedicated <%= vars.ops_manager %> instance, rather than on a shared <%= vars.ops_manager %> that also hosts other runtimes such as Tanzu Application Service.
+NOTE: VMware recommends deploying TKGI on its own dedicated Ops Manager instance, rather than on a shared Ops Manager that also hosts other runtimes such as Tanzu Application Service.
<%= partial '_increase_persistent_disk' %>Note: Support for GCP is deprecated and will be entirely removed in <%= vars.k8s_runtime_abbr %> v1.19.
+Note: Support for GCP is deprecated and will be entirely removed in TKGI v1.19.
Note: Once a cluster has been created or updated to use AD authentication, you cannot update it to stop using AD authentication.
@@ -20,7 +20,7 @@ For information about gMSAs see [Group Managed Service Accounts Overview](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) in the Microsoft Windows Server documentation. -To manage AD integration with a <%= vars.k8s_runtime_abbr %>-provisioned Windows worker-based Kubernetes cluster: +To manage AD integration with a TKGI-provisioned Windows worker-based Kubernetes cluster: * [Create and Integrate a Cluster with AD Authentication](#create) * [Change a Cluster's Active Directory Authentication](#change-ad) @@ -42,7 +42,7 @@ To use AD to control access to Windows worker-based Kubernetes clusters, you neeNote: Once a cluster has been created or updated to use AD authentication, you cannot update it to stop using AD authentication.
@@ -83,7 +83,7 @@ in _Release Notes_ for additional requirements. * `CONFIG-FILE-NAME` is the path and filename of the configuration file you want to apply to the cluster. For information about GMSA command line configuration, see [GMSA Configuration Settings](#settings) below. -WARNING: Update the configuration file only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
1. Integrate the cluster with the AD gMSA as described in [Integrate the Cluster with Active Directory](#integrate), below. diff --git a/harbor.html.md.erb b/harbor.html.md.erb index 935668780..534420a90 100644 --- a/harbor.html.md.erb +++ b/harbor.html.md.erb @@ -3,28 +3,28 @@ title: Getting Started with VMware Harbor Registry owner: TKGI --- -This topic describes how to set up the VMware Harbor Registry (Harbor) image registry for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to set up the VMware Harbor Registry (Harbor) image registry for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).Note: When running on worker nodes, the monitoring components and integrations are visible to -both <%= vars.k8s_runtime_abbr %> admins and cluster users, such as developers. +both TKGI admins and cluster users, such as developers.
Prometheus DNS-based Service Discovery | External integration |
- <%= vars.k8s_runtime_abbr %> supports DNS-based service discovery for <%= vars.k8s_runtime_abbr %>-provisioned Linux cluster nodes:
+ TKGI supports DNS-based service discovery for TKGI-provisioned Linux cluster nodes:
master.cfcr.internal and master‑0.etcd.cfcr.internal
for control plane nodes and
worker.cfcr.internal for worker nodes.@@ -84,7 +84,7 @@ The following components and integrations can be used to monitor Kubernetes work | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Sink resources | -<%= vars.k8s_runtime_abbr %> component | +TKGI component | See Sink Resources, below. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Feature | Included in K8s | -Included in <%= vars.product_short %> | +Included in Tanzu Kubernetes Grid Integrated Edition | |||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Single tenant ingress | @@ -99,7 +99,7 @@ The following table details the features that <%= vars.product_short %> adds to||||||||||||||||||||||||||||||||||||
UserAuthenticationSuccess | ||||||||||||||||||||||||||||||||||||
Description | -A user has successfully logged into <%= vars.product_short %>. | +A user has successfully logged into Tanzu Kubernetes Grid Integrated Edition. | ||||||||||||||||||||||||||||||||||
Identifying String | @@ -116,7 +116,7 @@ actions taken by a user logged into the <%= vars.k8s_runtime_abbr %> CLI.||||||||||||||||||||||||||||||||||||
UserAuthenticationFailure | ||||||||||||||||||||||||||||||||||||
Description | -A user has failed a login attempt into <%= vars.product_short %>. | +A user has failed a login attempt into Tanzu Kubernetes Grid Integrated Edition. | ||||||||||||||||||||||||||||||||||
Identifying String | @@ -140,7 +140,7 @@ actions taken by a user logged into the <%= vars.k8s_runtime_abbr %> CLI.||||||||||||||||||||||||||||||||||||
ClientAuthenticationSuccess | ||||||||||||||||||||||||||||||||||||
Description | -A user has successfully gained access to a cluster in <%= vars.product_short %>. | +A user has successfully gained access to a cluster in Tanzu Kubernetes Grid Integrated Edition. | ||||||||||||||||||||||||||||||||||
Identifying String | @@ -164,7 +164,7 @@ actions taken by a user logged into the <%= vars.k8s_runtime_abbr %> CLI.||||||||||||||||||||||||||||||||||||
UserCreatedEvent | ||||||||||||||||||||||||||||||||||||
Description | -An administrator has successfully created a new user for <%= vars.product_short %>. | +An administrator has successfully created a new user for Tanzu Kubernetes Grid Integrated Edition. | ||||||||||||||||||||||||||||||||||
Identifying String | @@ -186,7 +186,7 @@ actions taken by a user logged into the <%= vars.k8s_runtime_abbr %> CLI.||||||||||||||||||||||||||||||||||||
UserDeletedEvent | ||||||||||||||||||||||||||||||||||||
Description | -An administrator has successfully deleted a user for <%= vars.product_short %>. | +An administrator has successfully deleted a user for Tanzu Kubernetes Grid Integrated Edition. | ||||||||||||||||||||||||||||||||||
Identifying String | @@ -208,8 +208,8 @@ actions taken by a user logged into the <%= vars.k8s_runtime_abbr %> CLI.||||||||||||||||||||||||||||||||||||
Telemetry Ping | ||||||||||||||||||||||||||||||||||||
Description | -The optional telemetry system has successfully reached an external host for collecting product data for <%= vars.product_short %>. - To learn more about the <%= vars.product_short %> telemetry program, see Telemetry. |
+ The optional telemetry system has successfully reached an external host for collecting product data for Tanzu Kubernetes Grid Integrated Edition. + To learn more about the Tanzu Kubernetes Grid Integrated Edition telemetry program, see Telemetry. |
||||||||||||||||||||||||||||||||||
Identifying String | @@ -272,6 +272,6 @@ Event Log format see the [Kubernetes documentation](https://kubernetes.io/docs/t ## Related Links -* For information about configuring syslog log transport, see [Installing <%= vars.product_short %>](./installing.html). -* For information about downloading <%= vars.k8s_runtime_abbr %> logs, see [Downloading Logs from VMs](./download-logs.html). +* For information about configuring syslog log transport, see [Installing Tanzu Kubernetes Grid Integrated Edition](./installing.html). +* For information about downloading TKGI logs, see [Downloading Logs from VMs](./download-logs.html). * For information about Kubernetes Audit Log format, see [Kubernetes documentation](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/) diff --git a/monitor-etcd.html.md.erb b/monitor-etcd.html.md.erb index 2abf3682c..57cd58a99 100644 --- a/monitor-etcd.html.md.erb +++ b/monitor-etcd.html.md.erb @@ -4,7 +4,7 @@ owner: TKGI --- This topic describes how to configure Telegraf in -<%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). ## Overview @@ -14,8 +14,8 @@ such as Wavefront or Datadog. For more information about collected metrics, see [Metrics: Telegraf](host-monitoring.html#telegraf) in -_Monitoring <%= vars.k8s_runtime_abbr %> and -<%= vars.k8s_runtime_abbr %>-Provisioned Clusters_. +_Monitoring TKGI and +TKGI-Provisioned Clusters_. ## Collect Metrics Using Telegraf @@ -23,12 +23,12 @@ To collect metrics using Telegraf: 1. Create a configuration file for your output plugin. See [Create a Configuration File](#toml) below. -1. Configure Telegraf in the <%= vars.product_tile %> tile. +1. Configure Telegraf in the Tanzu Kubernetes Grid Integrated Edition tile. See [Configure Telegraf in the Tile](#connect) below. ### Create a Configuration File -To connect a monitoring service to <%= vars.k8s_runtime_abbr %>, you must create a configuration file for the service. The configuration file is written in a TOML format and consists of key-value pairs. After you create your configuration file, you can enter the file into the <%= vars.product_tile %> tile to connect the service. +To connect a monitoring service to TKGI, you must create a configuration file for the service. The configuration file is written in a TOML format and consists of key-value pairs. After you create your configuration file, you can enter the file into the Tanzu Kubernetes Grid Integrated Edition tile to connect the service. To create a configuration file for your monitoring service: @@ -51,9 +51,9 @@ For example, if you want to create a configuration file for an HTTP output plugi ### Configure Telegraf in the Tile -To configure <%= vars.k8s_runtime_abbr %> to use Telegraf for metric collection: +To configure TKGI to use Telegraf for metric collection: -1. Navigate to the **<%= vars.product_tile %>** tile > **Settings** > **Host Monitoring**. +1. Navigate to the **Tanzu Kubernetes Grid Integrated Edition** tile > **Settings** > **Host Monitoring**. 1. Under **Enable Telegraf Outputs?**, select **Yes**.  @@ -162,10 +162,10 @@ To configure <%= vars.k8s_runtime_abbr %> to use Telegraf for metric collection:
Note: - The Telegraf output configuration options are visible to <%= vars.k8s_runtime_abbr %> admins only. + The Telegraf output configuration options are visible to TKGI admins only.
- Components you enable in this step will be visible to <%= vars.k8s_runtime_abbr %> admins only. + Components you enable in this step will be visible to TKGI admins only. 1. In **Setup Telegraf Outputs**, replace the default value `[[outputs.discard]]` with the contents of the configuration file that you created in [Create a Configuration File](#toml) above. @@ -180,14 +180,6 @@ See the following example for an HTTP output plugin: [processors.override.tags] director = "bosh-director-1" ``` -<% if vars.product_version == "COMMENTED" %> -Note:
- If you use the Prometheus Output plugin, your Prometheus Client must be configured with metric_version=2
.
- For Telegraf Prometheus Output plugin configuration information, see
- Configuration
- in the Telegraf GitHub repository.
-
Position | -Metric Version 1 | -Metric Version 2 | -
---|---|---|
1 | -rest_client_requests_total | -prometheus | -
2 | -cluster_name | -cluster_name | -
3 | -code | -code | -
4 | -host | -host | -
5 | -internal_ip | -internal_ip | -
6 | -method | -method | -
7 | -url | -url | -
8 | -counter | -rest_client_requests_total | -
Metric Version | -Example Output | -
---|---|
Metric Version 1 | -
- rest_client_requests_total,cluster_name=aaa,code=200,host=master.cfcr.internal:8443,internal_ip=30.0.0.9,
- method=GET,url=https://localhost:10259/metrics counter=313910 1660534610000000000
- |
-
Metric Version 2 | -
- prometheus,cluster_name=aaa,code=200,host=master.cfcr.internal:8443,internal_ip=30.0.0.9,method=GET,
- url=https://localhost:10259/metrics rest_client_requests_total=314705 1660536100000000000
- |
-
Position | -Memory Metrics | -Agent Metrics | -Write Metrics | -Scheduler Metrics | -API Server Metrics | -Internal Metrics | -
---|---|---|---|---|---|---|
1 | -internal_memstats | -internal_agent | -internal_write | -internal_gather | -internal_gather | -internal_gather | -
2 | -cluster_name | -cluster_name | -cluster_name | -alias=scheduler | -alias=api-server | -cluster_name | -
3 | -host | -go_version | -host | -cluster_name | -cluster_name | -host | -
4 | -internal_ip | -host | -internal_ip | -host | -host | -input | -
5 | -frees | -internal_ip | -output | -input | -input | -internal_ip | -
6 | -heap_alloc_bytes | -version | -version | -internal_ip | -internal_ip | -version | -
7 | -heap_sys_bytes | -metrics_written | -metrics_written | -version | -version | -metrics_gathered | -
8 | -alloc_bytes | -metrics_dropped | -metrics_dropped | -errors | -errors | -gather_time_ns | -
9 | -mallocs | -metrics_gathered | -buffer_size | -metrics_gathered | -metrics_gathered | -errors | -
10 | -heap_in_use_bytes | -gather_errors | -buffer_limit | -gather_time_ns | -gather_time_ns | -- |
11 | -heap_released_bytes | -- | metrics_filtered | -- | - | - |
12 | -heap_objects | -- | write_time_ns | -- | - | - |
13 | -total_alloc_bytes | -- | errors | -- | - | - |
14 | -pointer_lookups | -- | metrics_added | -- | - | - |
Metric Type | -Example Output | -
---|---|
Memory | -
- internal_memstats,cluster_name=aaa,host=859bffa0-eeab-4f81-b664-53ca5a9fb756,internal_ip=30.0.0.9 frees=464168i,
- heap_alloc_bytes=20156736i,heap_sys_bytes=28868608i,alloc_bytes=20156736i,sys_bytes=37897224i,mallocs=615347i,
- heap_idle_bytes=7274496i,heap_in_use_bytes=21594112i,heap_released_bytes=2621440i,heap_objects=151179i,num_gc=7i,
- total_alloc_bytes=50755200i,pointer_lookups=0i 1660534850000000000
- |
-
Agent | -
- internal_agent,cluster_name=aaa,go_version=1.17,host=859bffa0-eeab-4f81-b664-53ca5a9fb756,internal_ip=30.0.0.9,
- version=1.20.2 metrics_written=3408i,metrics_dropped=0i,metrics_gathered=3551i,gather_errors=0i 1660534850000000000
- |
-
Write | -
- internal_write,cluster_name=aaa,host=859bffa0-eeab-4f81-b664-53ca5a9fb756,internal_ip=30.0.0.9,output=file,
- version=1.20.2 metrics_written=3408i,metrics_dropped=0i,buffer_size=142i,buffer_limit=200000i,metrics_filtered=0i,
- write_time_ns=685575i,errors=0i,metrics_added=3550i 1660534850000000000
- |
-
Scheduler | -
- internal_gather,alias=scheduler,cluster_name=aaa,host=859bffa0-eeab-4f81-b664-53ca5a9fb756,input=prometheus,
- internal_ip=30.0.0.9,version=1.20.2 errors=0i,metrics_gathered=3425i,gather_time_ns=18387320i 1660534850000000000
- |
-
API Server | -
- internal_gather,alias=api-server,cluster_name=aaa,host=859bffa0-eeab-4f81-b664-53ca5a9fb756,input=prometheus,
- internal_ip=30.0.0.9,version=1.20.2 errors=0i,metrics_gathered=2747i,gather_time_ns=140194058i 1660546440000000000
- |
-
Internal | -
- internal_gather,cluster_name=aaa,host=859bffa0-eeab-4f81-b664-53ca5a9fb756,input=internal,internal_ip=30.0.0.9,
- version=1.20.2 metrics_gathered=126i,gather_time_ns=1265947i,errors=0i 1660534850000000000
- |
-
Note: -<%= vars.k8s_runtime_abbr %> supports NSX Management Plane API to NSX Policy API Migration only. -You cannot return <%= vars.k8s_runtime_abbr %> to the NSX Management Plane API -after starting <%= vars.k8s_runtime_abbr %> MP2P Migration. +TKGI supports NSX Management Plane API to NSX Policy API Migration only. +You cannot return TKGI to the NSX Management Plane API +after starting TKGI MP2P Migration.
Warning: -Limit upgrading NSX and <%= vars.k8s_runtime_abbr %> to only resolving critical issues while your environment is in MP2P Migration mixed-mode. +Limit upgrading NSX and TKGI to only resolving critical issues while your environment is in MP2P Migration mixed-mode.
@@ -172,7 +168,7 @@ documents the recommended two-step MP2P firewall migration procedure for typical * Re-create the bottom firewall rules immediately after promoting clusters. Adhering to the recommended DFW migration sequence is critical to maintaining security and cluster workload network connectivity. -If a <%= vars.k8s_runtime_abbr %> cluster is: +If a TKGI cluster is: * Promoted before existing Management Plane API top firewall rules have been recreated in the Policy API section: Cluster network policies will be enforced before the top firewall rules. @@ -213,69 +209,3 @@ Clusters that are not actively being promoted: The amount of time it takes to promote a cluster depends on the scale of resources NSX needs to migrate the cluster, and the time it takes to update the cluster. - -<% if vars.product_version == "COMMENTED" %> -* Takes about the same amount of time to complete as `tkgi update-cluster` takes for the cluster being promoted. -<% end %> - - -<% if vars.product_version == "COMMENTED" %> -Warning: -Limit upgrading NSX and <%= vars.k8s_runtime_abbr %> to only resolving critical issues while your environment is in MP2P Migration mixed-mode. +Limit upgrading NSX and TKGI to only resolving critical issues while your environment is in MP2P Migration mixed-mode.
Note: Retain the exported back up for use at the end of <%= vars.k8s_runtime_abbr %> MP2P Migration. -
- -1. Back up the <%= vars.k8s_runtime_abbr %> environment, including workloads on all clusters. -For more information, see [Backing Up and Restoring Tanzu Kubernetes Grid Integrated Edition](backup-and-restore.html). -1. Back up NSX. -<% end %> - -Note: -After configuring the <%= vars.k8s_runtime_abbr %> tile, newly created clusters use the NSX Policy API. +After configuring the TKGI tile, newly created clusters use the NSX Policy API.
- -<% if vars.product_version == "COMMENTED" %> -To reconfigure your <%= vars.k8s_runtime_abbr %> environment for NSX Policy API: - -* Configure Ops Manager -* Configure <%= vars.k8s_runtime_abbr %> - -**Configure Ops Manager** - -To configure the <%= vars.k8s_runtime_abbr %> tile with the policy object IDs created by test cluster migration: - -< % if vars.product_version == "v1.15" % > -1. To update Ops Manager to Policy API mode: - - 1. SSH to the Ops Manager host. - 1. Fill in Policy API properties using the Ops Manager CLI: - - ``` - - ``` - -< % else % > -1. To update Ops Manager to Policy API mode: - - * Ops Manager v3.0 and later: - - 1. Open the Ops Manager UI. - 1. Select **Use NSX Policy API**. - 1. (OPS MAN SAVE STEP?) - - * Ops Manager v2.10.45 and later: - - 1. Fill in Policy API properties using the Ops Manager CLI. - -**Configure <%= vars.k8s_runtime_abbr %>** - - -1. To reconfigure <%= vars.k8s_runtime_abbr %> with Resource Policy IDs: - - 1. Open the <%= vars.k8s_runtime_abbr %> tile UI. - 1. Open the **Networking** tab. - 1. Activate **Policy API mode**. - 1. Replace the Management Plane API IDs with the retained Resource Policy IDs: - - * **Pods IP Block ID** - * **Nodes IP Block ID** - * **T0 Router ID** - * **Floating IP Pool ID** - - 1. Select **Apply Changes**. - -Note: -After configuring Ops Manager and the <%= vars.k8s_runtime_abbr %> tile, newly created cluster use the NSX Policy API. -
-<% end %> - -<% if vars.product_version == "COMMENTED" %> -**<%= vars.k8s_runtime_abbr %> Tile** - -To configure the <%= vars.k8s_runtime_abbr %> tile with the policy object IDs created by test cluster migration: - -1. Open the <%= vars.k8s_runtime_abbr %> tile to XXXX > XXXX. -1. Update <%= vars.k8s_runtime_abbr %> tile to Policy API mode: - - 1. Fill in the <%= vars.k8s_runtime_abbr %> tile UI with Policy ID. - 1. Select **Policy Mode**. - 1. Select **Apply Changes**. - -<% end %> - -Note: -Do not intentionally run <%= vars.k8s_runtime_abbr %> in mixed mode for an extended period of time. -Promote all <%= vars.k8s_runtime_abbr %> clusters to NSX Policy API as quickly as possible. +Do not intentionally run TKGI in mixed mode for an extended period of time. +Promote all TKGI clusters to NSX Policy API as quickly as possible.
Note: - Do not attempt to promote an additional <%= vars.k8s_runtime_abbr %> cluster to NSX Policy API + Do not attempt to promote an additional TKGI cluster to NSX Policy API before completing the promotion of the current clusters.
@@ -551,18 +422,12 @@ To migrate an individual cluster from the NSX Management Plane API to NSX Policy Do not restart cluster promotion for this cluster. Review the `mp_to_policy_importer` logs and confirm NCP can be manually restarted on all master VMs in Management Plane API mode. -<% if vars.product_version == "COMMENTED" %> - If the cluster did not successfully migrate to the NSX Policy API, - see [Failure and Recovery](https://docs.vmware.com/en/VMware-NSX-Container-Plugin/4.0/ncp-kubernetes/GUID-E4BA4AA9-0E7A-47AA-909F-24B4E4B621C8.html) - in the VMware NSX Container Plugin documentation. -<% end %> - For more information on cluster limitations while promoting a cluster, see [During Cluster Promotion to NSX Policy API](mp2p-migration-concepts.html#concerns-limitations-during-cluster) in _Migrating the NSX Management Plane API to NSX Policy API - Overview_.Note: -Do not attempt to promote an additional <%= vars.k8s_runtime_abbr %> cluster to NSX Policy API +Do not attempt to promote an additional TKGI cluster to NSX Policy API before completing the promotion of the current clusters.
@@ -596,9 +461,9 @@ in _Migrating the NSX Management Plane API to NSX Policy API - Overview_.Note: If a cluster manager, pks.clusters.manage
, attempts to create or delete a network profile,
the following error occurs: "You do not have enough privileges to perform this action.
- Please contact the <%= vars.k8s_runtime_abbr %> administrator."
+ Please contact the TKGI administrator."
WARNING: Update the network profile only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Update the network profile only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
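For reference, the following sketch shows the admin-side workflow with the TKGI CLI. The file name `np-custom.json`, the profile name `np-custom`, and the cluster details are placeholders; the profile name is assumed to match the `name` field inside the JSON definition:

```
# Create and list network profiles (requires the pks.clusters.admin scope).
tkgi create-network-profile np-custom.json
tkgi network-profiles

# Apply the profile when creating a cluster (cluster name, external hostname,
# and plan are placeholders).
tkgi create-cluster demo-cluster \
    --external-hostname demo-cluster.tkgi.example.com \
    --plan small \
    --network-profile np-custom
```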
ingress_prefix |
String | Ingress controller hostname prefix for DNS lookup.
- If DNS mode is set to API_INGRESS , <%= vars.k8s_runtime_abbr %> creates the cluster with
+ If DNS mode is set to API_INGRESS , TKGI creates the cluster with
ingress_prefix.hostname as the Kubernetes control plane FQDN.
- <%= vars.k8s_runtime_abbr %> confirms that the ingress subdomain can be resolved as a subdomain prefix on the host before creating new clusters.
+ TKGI confirms that the ingress subdomain can be resolved as a subdomain prefix on the host before creating new clusters.
|
||||||
ncp.nsx_v3.http_timeout | Integer Updatable | The time in seconds before aborting a HTTP connection to a NSX Manager.
Default: 10 . |
ncp.nsx_v3.k8s_np_use_ip_sets | Boolean | -Values: TRUE , FALSE .
- Default: TRUE .
- Must set to TRUE for NSX Management Plane API.
- Must set to FALSE for NSX Policy API. |
ncp.nsx_v3.l4_lb_auto_scaling | Boolean Updatable |
L4 load balancer auto scaling mode.
Values: TRUE , FALSE .
@@ -828,7 +821,7 @@ In your network profile under `parameters.cni_configurations.parameters.extensio
}
```
-**Filter out TAP labels**: If you are using Tanzu Application Platform (TAP), <%= vars.k8s_runtime_abbr %> and TAP together may create more Kubernetes tags than are allowed by NSX. To address this known issue, set `label_filtering_regex_list` to filter out labels generated by TAP:
+**Filter out TAP labels**: If you are using Tanzu Application Platform (TAP), TKGI and TAP together may create more Kubernetes tags than are allowed by NSX. To address this known issue, set `label_filtering_regex_list` to filter out labels generated by TAP:
```
"ncp": {
@@ -890,7 +883,7 @@ The primary use case for configuring the network profile CNI configuration `exte
is to configure the less commonly configured NCP and NSX Node Agent settings.
Use the network profiles `extensions` field to configure an NCP ConfigMap or NSX Node Agent ConfigMap property
-that is applicable to <%= vars.k8s_runtime_abbr %> but is not explicitly supported as a `cni_configurations` parameter.
+that is applicable to TKGI but is not explicitly supported as a `cni_configurations` parameter.
NCP and NSX Node Agent settings supported as explicit Network Profiles parameters cannot be configured through extensions.
@@ -925,8 +918,8 @@ To add NSX floating IP pool UUIDs to a cluster:
For example, do not create a copy of a network profile, remove `fip_pool_ids` array values,
and assign the new profile to the cluster that has the original profile assigned.
-Note: <%= vars.k8s_runtime_abbr %> allocates IP Addresses from the start of the floating IP pool range. -    To avoid conflicts with internal <%= vars.k8s_runtime_abbr %> functions, always use IP addresses from the end of the floating IP pool. For more information, see +    Note: TKGI allocates IP Addresses from the start of the floating IP pool range. +    To avoid conflicts with internal TKGI functions, always use IP addresses from the end of the floating IP pool. For more information, see Failed to Allocate FIP from Pool in General Troubleshooting. @@ -991,7 +984,7 @@ For more information on the `pod_ip_block_ids` field, see [Network Profile Param ### Network Profile Use Cases Network profiles let you customize configuration parameters for -Kubernetes clusters provisioned by <%= vars.k8s_runtime_abbr %> on vSphere with NSX. +Kubernetes clusters provisioned by TKGI on vSphere with NSX. You can apply a network profile to a Kubernetes cluster for the following scenarios: diff --git a/network-profiles-dns.html.md.erb b/network-profiles-dns.html.md.erb index 0c7663854..326b88b79 100644 --- a/network-profiles-dns.html.md.erb +++ b/network-profiles-dns.html.md.erb @@ -3,34 +3,34 @@ title: Configure DNS for Pre-Provisioned IPs owner: TKGI --- -This topic describes how to define network profile for performing DNS lookup of the pre-provisioned IP addresses for the Kubernetes API load balancer and ingress controller for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) provisioned Kubernetes clusters. +This topic describes how to define a network profile for performing DNS lookup of the pre-provisioned IP addresses for the Kubernetes API load balancer and ingress controller for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) provisioned Kubernetes clusters. ## About DNS Lookup of Pre-Provisioned IP Addresses -In an <%= vars.product_short %> environment on NSX, when you provision a Kubernetes cluster using the command `tkgi create-cluster`, NSX creates a layer 4 load balancer that fronts the Kubernetes API server running on the control plane node(s). In addition, NCP creates two layer 7 virtual servers (HTTP and HTTPS) as front-end load balancers for the ingress resources in Kubernetes servers. +In a Tanzu Kubernetes Grid Integrated Edition environment on NSX, when you provision a Kubernetes cluster using the command `tkgi create-cluster`, NSX creates a layer 4 load balancer that fronts the Kubernetes API server running on the control plane node(s). In addition, NCP creates two layer 7 virtual servers (HTTP and HTTPS) as front-end load balancers for the ingress resources in Kubernetes servers. The IP addresses that are assigned to the API load balancer and ingress controller are derived from the floating IP pool in NSX. These IP addresses are not known in advance, and you have to wait for the IP addresses to be allocated to know what they are so you can update your DNS records. -If you want to pre-provision these IP addresses, you define a network profile to lookup the IP addresses for these components from your DNS server. In this way you can tell <%= vars.k8s_runtime_abbr %> what IP addresses to use for these resources when the cluster is created, and be able to have DNS records for them so FQDNs can be used. +If you want to pre-provision these IP addresses, you define a network profile to look up the IP addresses for these components from your DNS server. 
In this way you can tell TKGI what IP addresses to use for these resources when the cluster is created, and be able to have DNS records for them so FQDNs can be used. ## DNS Lookup Parameters -Using the `dns_lookup_mode` parameter, you can define a network profile to specify the lookup mode: `API` or `API_INGRESS`. If the mode is `API`, <%= vars.k8s_runtime_abbr %> will perform a lookup of the pre-provisioned IP address for the Kubernetes API load balancer. If the mode is `API_INGRESS`, <%= vars.k8s_runtime_abbr %> will perform a lookup of the pre-provisioned IP addresses for the Kubernetes API load balancer and the ingress controller. +Using the `dns_lookup_mode` parameter, you can define a network profile to specify the lookup mode: `API` or `API_INGRESS`. If the mode is `API`, TKGI will perform a lookup of the pre-provisioned IP address for the Kubernetes API load balancer. If the mode is `API_INGRESS`, TKGI will perform a lookup of the pre-provisioned IP addresses for the Kubernetes API load balancer and the ingress controller. -The IP addresses used must come from the floating IP pool. The floating IP pool comes from the <%= vars.k8s_runtime_abbr %> tile configuration unless specified in the network profile. +The IP addresses used must come from the floating IP pool. The floating IP pool comes from the TKGI tile configuration unless specified in the network profile. -Note: <%= vars.k8s_runtime_abbr %> allocates IP Addresses from the start of the floating IP pool range. - To avoid conflicts with internal <%= vars.k8s_runtime_abbr %> functions, always use IP addresses from the end of the floating IP pool. + Note: TKGI allocates IP Addresses from the start of the floating IP pool range. + To avoid conflicts with internal TKGI functions, always use IP addresses from the end of the floating IP pool. -The DNS lookup, whether for the Kubernetes control plane node(s) load balancer or the ingress controller, is performed in the Kubernetes control plane VM using the DNS server(s) configured in the <%= vars.k8s_runtime_abbr %> tile or the `nodes_dns` field in the network profile. +The DNS lookup, whether for the Kubernetes control plane node(s) load balancer or the ingress controller, is performed in the Kubernetes control plane VM using the DNS server(s) configured in the TKGI tile or the `nodes_dns` field in the network profile. You cannot modify the DNS lookup mode configuration on an existing cluster. ## Example API Load Balancer Lookup -The following network profile, api.json, triggers a DNS lookup for the Kubernetes control plane node(s) IP address. In this example, a custom floating IP pool is specified, and DNS servers. If these parameters are not specified, the values in the <%= vars.k8s_runtime_abbr %> tile are used. +The following network profile, api.json, triggers a DNS lookup for the Kubernetes control plane node(s) IP address. In this example, a custom floating IP pool is specified, and DNS servers. If these parameters are not specified, the values in the TKGI tile are used. ``` { @@ -75,9 +75,9 @@ Where: * `FIP-POOL-ID1` and `FIP-POOL-ID2` are Floating IP Pool IDs. * `INGRESS-SUBDOMAIN` is the ingress subdomain prefix. -Because DNS mode is set to `API_INGRESS`, <%= vars.k8s_runtime_abbr %> creates the cluster with ingress_prefix.hostname +Because DNS mode is set to `API_INGRESS`, TKGI creates the cluster with ingress_prefix.hostname as the Kubernetes control plane FQDN. 
-<%= vars.k8s_runtime_abbr %> confirms that the ingress subdomain can be resolved as a subdomain prefix on the host before creating new clusters. +TKGI confirms that the ingress subdomain can be resolved as a subdomain prefix on the host before creating new clusters. ## Setting the Control Plane Node IP Address on the Command Line @@ -98,6 +98,6 @@ $ tkgi create-cluster my-cluster -e 192.168.160.20 -p small The IP address that you use must belong to a valid floating IP pool created in NSX. -Note: <%= vars.k8s_runtime_abbr %> allocates IP Addresses from the start of the floating IP pool range. - To avoid conflicts with internal <%= vars.k8s_runtime_abbr %> functions, always use IP addresses from the end of the floating IP pool. + Note: TKGI allocates IP Addresses from the start of the floating IP pool range. + To avoid conflicts with internal TKGI functions, always use IP addresses from the end of the floating IP pool. diff --git a/network-profiles-edge.html.md.erb b/network-profiles-edge.html.md.erb index 0fb5b7272..6db72e10a 100644 --- a/network-profiles-edge.html.md.erb +++ b/network-profiles-edge.html.md.erb @@ -3,13 +3,13 @@ title: Configure Edge Router Selection owner: TKGI --- -This topic describes how to define network profiles for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) provisioned Kubernetes clusters on vSphere with NSX. +This topic describes how to define network profiles for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) provisioned Kubernetes clusters on vSphere with NSX. ## Edge Router Selection -Using <%= vars.product_short %> on vSphere with NSX, you can deploy Kubernetes clusters on dedicated Tier-0 routers, +Using Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX, you can deploy Kubernetes clusters on dedicated Tier-0 routers, creating a multi-tenant environment for each Kubernetes cluster. -As shown in the diagram below, with this configuration a shared Tier-0 router hosts the <%= vars.k8s_runtime_abbr %> control plane +As shown in the diagram below, with this configuration a shared Tier-0 router hosts the TKGI control plane and connects to each customer Tier-0 router using BGP. To support multi-tenancy, configure firewall rules and security settings in NSX Manager.![]() Note: <%= vars.k8s_runtime_abbr %> allocates IP Addresses from the start of the floating IP pool range. - To avoid conflicts with internal <%= vars.k8s_runtime_abbr %> functions, always use IP addresses from the end of the floating IP pool. For more information, see + Note: TKGI allocates IP Addresses from the start of the floating IP pool range. + To avoid conflicts with internal TKGI functions, always use IP addresses from the end of the floating IP pool. For more information, see Failed to Allocate FIP from Pool in General Troubleshooting. To define a custom floating IP pool, follow the steps below: -1. Create a floating IP pool using NSX Manager prior to provisioning a Kubernetes cluster using <%= vars.product_short %>. For more information, see [Create IP Pool](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-8639F737-1D75-4177-9D31-5F20551DEE8E.html) in the NSX documentation. +1. Create a floating IP pool using NSX Manager prior to provisioning a Kubernetes cluster using Tanzu Kubernetes Grid Integrated Edition. For more information, see [Create IP Pool](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-8639F737-1D75-4177-9D31-5F20551DEE8E.html) in the NSX documentation. 1. 
Ensure routing to your external Tier-0 Router allows traffic to the new custom Floating IP subnet. 1. Define a network profile with a `fip_pool_ids` array containing the UUIDs for the floating IP pools that you defined. If you want to include the default floating IP pool, diff --git a/network-profiles-index.html.md.erb b/network-profiles-index.html.md.erb index e89e3b898..aaa236885 100644 --- a/network-profiles-index.html.md.erb +++ b/network-profiles-index.html.md.erb @@ -3,7 +3,7 @@ title: Network Profiles (VMware NSX Only) owner: TKGI --- -The following topics describe how to define and use network profiles for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) provisioned Kubernetes clusters deployed on NSX with vSphere: +The following topics describe how to define and use network profiles for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) provisioned Kubernetes clusters deployed on NSX with vSphere:
|
Note: If you use the default Transport Zones, but do not use the exact name nsxHostSwitch
when configuring NSX on the Edge Node, you will receive the pks-nsx-t-osb-proxy
BOSH error when you try to deploy <%= vars.k8s_runtime_abbr %>.
Note: If you use the default Transport Zones, but do not use the exact name nsxHostSwitch
when configuring NSX on the Edge Node, you will receive the pks-nsx-t-osb-proxy
BOSH error when you try to deploy TKGI.
Note: The NSX-T 3.x Edge Node configuration displays the following message beside the Edge Switch Name field: "The switch name value need not be identical to host switch name associated with the Transport Zone." - This message does not apply to <%= vars.k8s_runtime_abbr %>.
+ This message does not apply to TKGI. If there is a mismatch between the the host switch name associated with the Transport Zone and the **Edge Switch Name**, -<%= vars.k8s_runtime_abbr %> installation fails with the following error: +TKGI installation fails with the following error: ``` Failed to get NSX provisioning properties: No transport zone with overlay type found in transport node as switch name is not same across the TZ and ESXI TN diff --git a/nsxt-certs-rotate.html.md.erb b/nsxt-certs-rotate.html.md.erb index 8cc3ad3d3..a153e16c7 100644 --- a/nsxt-certs-rotate.html.md.erb +++ b/nsxt-certs-rotate.html.md.erb @@ -3,7 +3,7 @@ title: Rotate VMware NSX Certificates for Kubernetes Clusters owner: PKS-NSXT --- -This topic describes how to list and rotate TLS certificates for Kubernetes clusters provisioned by <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to list and rotate TLS certificates for Kubernetes clusters provisioned by VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). ## About NSX Certificate Rotation for Kubernetes Clusters Provisioned by TKGI @@ -58,7 +58,7 @@ To list the TLS certificates created for a TKGI-provisioned Kubernetes cluster: To rotate the TLS certificates for NSX: -1. To skip SSL verification during the certificate rotation, you must first deactivate SSL verification on the <%= vars.k8s_runtime_abbr %> tile. +1. To skip SSL verification during the certificate rotation, you must first deactivate SSL verification on the TKGI tile. For more information, see the **Disable SSL certification verification** configuration instructions in [Networking](installing-nsx-t.html#networking). 1. Run the following: @@ -87,7 +87,7 @@ For more information, see the **Disable SSL certification verification** configu You are about to rotate nsx related certificates for cluster tkgi-cluster-01. This operation requires bosh deployment, and will take a significant time. Are you sure you want to continue? (y/n): ``` -WARNING: Rotate cluster certificates only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Rotate cluster certificates only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
1. If running `tkgi rotate-certs` fails to rotate the certificates, you must manually rotate the certificates. diff --git a/nsxt-create-objects.html.md.erb b/nsxt-create-objects.html.md.erb index cf929327c..a1d77c595 100644 --- a/nsxt-create-objects.html.md.erb +++ b/nsxt-create-objects.html.md.erb @@ -3,11 +3,11 @@ title: Creating VMware NSX Objects for Tanzu Kubernetes Grid Integrated Edition owner: TKGI --- -This topic describes how to create VMware NSX Objects for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to create VMware NSX Objects for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). ##Overview -Installing <%= vars.product_full %> on vSphere with NSX requires the creation of NSX IP blocks for Kubernetes node and pod networks, as well as a Floating IP Pool from which you can assign routable IP addresses to cluster resources. +Installing VMware Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX requires the creation of NSX IP blocks for Kubernetes node and pod networks, as well as a Floating IP Pool from which you can assign routable IP addresses to cluster resources. Create separate NSX IP Blocks for the [node networks](./nsxt-prepare-env.html#nodes-ip-block) and the [pod networks](./nsxt-prepare-env.html#pods-ip-block), with subnets of size 256 (/16) for both nodes and pods. @@ -16,10 +16,10 @@ and [Reserved IP Blocks](./nsxt-prepare-env.html#reserved-ip-blocks). For more information about NSX-T IP Blocks, see [Advanced IP Address Management](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.5/administration/GUID-A1254321-0C17-4458-BCA9-53FBEFCE98C3.html) in the _VMware NSX-T Data Center_ documentation. - * **NODE-IP-BLOCK** is used by <%= vars.product_short %> to assign address space to Kubernetes control plane and worker nodes when new clusters are deployed or a cluster increases its scale. + * **NODE-IP-BLOCK** is used by Tanzu Kubernetes Grid Integrated Edition to assign address space to Kubernetes control plane and worker nodes when new clusters are deployed or a cluster increases its scale. * **POD-IP-BLOCK** is used by the NSX Container Plug-in (NCP) to assign address space to Kubernetes pods through the Container Networking Interface (CNI). -In addition, create a Floating IP Pool from which to assign routable IP addresses to components. This network provides your load balancing address space for each Kubernetes cluster created by <%= vars.product_short %>. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, `10.172.2.0/24` provides 256 usable IPs. This network is used when creating the virtual IP pools, or when the services are deployed. You enter this network in the **Floating IP Pool ID** field in the **Networking** pane of the <%= vars.product_tile %> tile. +In addition, create a Floating IP Pool from which to assign routable IP addresses to components. This network provides your load balancing address space for each Kubernetes cluster created by Tanzu Kubernetes Grid Integrated Edition. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, `10.172.2.0/24` provides 256 usable IPs. This network is used when creating the virtual IP pools, or when the services are deployed. You enter this network in the **Floating IP Pool ID** field in the **Networking** pane of the Tanzu Kubernetes Grid Integrated Edition tile. Complete the following instructions to create the required NSX network objects. 
@@ -36,7 +36,7 @@ Complete the following instructions to create the required NSX network objects. 1. Verify creation of the Nodes IP Block.Note: The Linux VM must have OpenSSL installed and have network access to the NSX Manager. For example, you can use the <%= vars.k8s_runtime_abbr %> client VM where you install the <%= vars.k8s_runtime_abbr %> CLI.
+Note: The Linux VM must have OpenSSL installed and have network access to the NSX Manager. For example, you can use the TKGI client VM where you install the TKGI CLI.
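Before continuing, you can quickly confirm that the Linux VM meets both requirements in the note above. A minimal sketch; NSX-MANAGER-IP is a placeholder:
```
# Confirm OpenSSL is installed on the VM
openssl version

# Confirm the VM has network access to the NSX Manager
# (-k skips certificate verification for this connectivity check only)
curl -k -s -o /dev/null -w "%{http_code}\n" "https://NSX-MANAGER-IP"
```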
#### Step 1: Generate and Register the Certificate and Key @@ -85,7 +85,7 @@ You must generate a certificate and private key, and create the Super User Princ To create the Super User Principal Identity, create and run the `create_certificate_pi.sh` script: -1. Log in to a Linux VM in your <%= vars.product_short %> environment. +1. Log in to a Linux VM in your Tanzu Kubernetes Grid Integrated Edition environment. 1. Create an empty file using `vi create_certificate_pi.sh` or `nano create_certificate_pi.sh`. 1. Modify the file you created to have the following script contents: @@ -165,19 +165,19 @@ with the role `Enterprise Admin` on the NSX Manager **System** > **Users** > **R [View a larger version of this image.](images/nsxt/nsx-create_pi-result.png) -### Option B: Generate and Register the Certificate and Key Using the <%= vars.product_tile %> Tile +### Option B: Generate and Register the Certificate and Key Using the Tanzu Kubernetes Grid Integrated Edition Tile #### Step 1: Generate the Certificate and Key -To generate the certificate and key automatically in the **Networking** pane in the <%= vars.product_tile %> tile, follow the steps below: +To generate the certificate and key automatically in the **Networking** pane in the Tanzu Kubernetes Grid Integrated Edition tile, follow the steps below: -1. Navigate to the **Networking** pane in the <%= vars.product_tile %> tile. For more information, see [Networking](installing-nsx-t.html#networking) in _Installing <%= vars.product_short %> on vSphere with NSX Integration_. +1. Navigate to the **Networking** pane in the Tanzu Kubernetes Grid Integrated Edition tile. For more information, see [Networking](installing-nsx-t.html#networking) in _Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX Integration_. 1. Click **Generate RSA Certificate** and provide a wildcard domain. For example, `*.nsx.tkgi.vmware.local`. #### Step 2: Copy the Certificate and Key to the Linux VM To copy the certificate and key you generated to a Linux VM, follow the steps below: -Note: The Linux VM must have OpenSSL installed and have network access to the NSX Manager. For example, you can use the <%= vars.k8s_runtime_abbr %> client VM where you install the <%= vars.k8s_runtime_abbr %> CLI.
+Note: The Linux VM must have OpenSSL installed and have network access to the NSX Manager. For example, you can use the TKGI client VM where you install the TKGI CLI.
1. On the Linux VM you want to use to register the certificate, create a file named `pks-nsx-t-superuser.crt`. Copy the generated certificate into the file. 1. On the Linux VM you want to use to register the key, create a file named `pks-nsx-t-superuser.key`. Copy the generated private key into the file. @@ -240,5 +240,5 @@ To rotate the NSX Principal Identity super user certificate, see ## Next Installation Step -If you have completed this procedure as part of installing <%= vars.k8s_runtime_abbr %> for the first time, -proceed to Installing <%= vars.k8s_runtime_abbr %> on vSphere with NSX. +If you have completed this procedure as part of installing TKGI for the first time, +proceed to Installing TKGI on vSphere with NSX. diff --git a/nsxt-health.html.md.erb b/nsxt-health.html.md.erb index 9a14fe05e..6b133208d 100644 --- a/nsxt-health.html.md.erb +++ b/nsxt-health.html.md.erb @@ -3,11 +3,11 @@ title: Viewing and Troubleshooting the Health Status of Cluster Network Objects owner: TKGI-NSX --- -This topic describes how cluster managers and users can troubleshoot NSX networking errors using the `kubectl nsxerrors` command for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how cluster managers and users can troubleshoot NSX networking errors using the `kubectl nsxerrors` command for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). ## About the NSX Errors CRD -The NSX Errors CRD gives you the ability to view errors related to NSX that might occur when applications are deployed to a <%= vars.k8s_runtime_abbr %>-provisioned Kubernetes cluster. Previously, NSX errors were logged in NCP logs on the control plane nodes, which cluster users do not have access to. The NSX Errors CRD improves visibility and troubleshooting for cluster managers and users. +The NSX Errors CRD gives you the ability to view errors related to NSX that might occur when applications are deployed to a TKGI-provisioned Kubernetes cluster. Previously, NSX errors were logged in NCP logs on the control plane nodes, which cluster users do not have access to. The NSX Errors CRD improves visibility and troubleshooting for cluster managers and users. The NSX Errors CRD creates a `nsxerror` object for each Kubernetes resource that encounters an NSX error during attempted creation. In addition, the Kubernetes resource is annotated with the `nsxerror` object name. The NSX Error CRD provides the command `kubectl nsxerrors` that lets you view the NSX errors encountered during resource creation. The `nsxerror` object is deleted once the NSX error is resolved and the Kubernetes resource is successfully created. diff --git a/nsxt-ingress-monitor.html.md.erb b/nsxt-ingress-monitor.html.md.erb index 67082a54b..c56ec5977 100644 --- a/nsxt-ingress-monitor.html.md.erb +++ b/nsxt-ingress-monitor.html.md.erb @@ -4,7 +4,7 @@ owner: TKGI-NSX lbtype: monitor --- -This topic describes how to monitor the health status of the NSX ingress load balancer resources for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to monitor the health status of the NSX ingress load balancer resources for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).Note: This feature requires NCP v2.5.1 or later.
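The NSX Errors CRD workflow described above can be driven with standard kubectl commands. A minimal sketch; the service name and namespace are placeholders, and the exact arguments accepted by the `nsxerrors` plugin may differ by NCP version:
```
# View the NSX errors recorded for resources in the cluster
kubectl nsxerrors

# Inspect the annotated Kubernetes resource to find the associated
# nsxerror object name (service name and namespace are placeholders)
kubectl describe service my-service --namespace demo
```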
diff --git a/nsxt-ingress-rewrite-url.html.md.erb b/nsxt-ingress-rewrite-url.html.md.erb index 9801aff15..1aae5688e 100644 --- a/nsxt-ingress-rewrite-url.html.md.erb +++ b/nsxt-ingress-rewrite-url.html.md.erb @@ -3,11 +3,11 @@ title: Using Ingress URL Rewrite owner: TKGI-NSX --- -This topic describes how to perform URL rewrite for Kubernetes ingress resources for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to perform URL rewrite for Kubernetes ingress resources for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). ## About Support for URL Rewrite for Ingress Resources -<%= vars.product_short %> supports ingress URL path rewrite using NSX-T or NSX v2.5.1+ and NCP v2.5.1+. +Tanzu Kubernetes Grid Integrated Edition supports ingress URL path rewrite using NSX-T or NSX v2.5.1+ and NCP v2.5.1+. All the ingress paths will be rewritten to the provided value. If an ingress has annotation `ingress.kubernetes.io/rewrite-target: /` and has path `/tea`, for example, the URI `/tea` will be rewritten to `/` before the request is sent to the backend service. Numbered capture groups are supported. diff --git a/nsxt-ingress-scale.html.md.erb b/nsxt-ingress-scale.html.md.erb index 4858641b7..3b8caf336 100644 --- a/nsxt-ingress-scale.html.md.erb +++ b/nsxt-ingress-scale.html.md.erb @@ -4,7 +4,7 @@ owner: TKGI-NSX lbtype: layer7 --- -This topic describes how to scale ingress resources for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to scale ingress resources for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).Note: This feature requires NCP v2.5.1 or later.
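The `/tea` rewrite example above corresponds to an ingress definition like the following. A minimal sketch, assuming a backend service named `tea-svc` on port 80 (both placeholders); older clusters may need the `networking.k8s.io/v1beta1` ingress API instead:
```
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tea-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
EOF
```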
diff --git a/nsxt-ingress-srvc-lb.html.md.erb b/nsxt-ingress-srvc-lb.html.md.erb index ad17ce13b..f48430be2 100644 --- a/nsxt-ingress-srvc-lb.html.md.erb +++ b/nsxt-ingress-srvc-lb.html.md.erb @@ -3,7 +3,7 @@ title: Configuring Ingress Resources and Load Balancer Services owner: TKGI-NSX --- -This topic describes example ingress routing (Layer 7) and load balancing (Layer 4) configurations for Kubernetes clusters deployed by <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on vSphere with NSX integration. +This topic describes example ingress routing (Layer 7) and load balancing (Layer 4) configurations for Kubernetes clusters deployed by VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere with NSX integration.Note: The examples in this topic are based on NCP v2.3.2.
@@ -92,7 +92,7 @@ see [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) NSX supports autoscaling, which spins up a new Kubernetes `type: LoadBalancer` service if the previous one has reached its scale limit. The NSX load balancer that is -automatically provisioned by <%= vars.product_short %> provides two Layer 7 virtual servers +automatically provisioned by Tanzu Kubernetes Grid Integrated Edition provides two Layer 7 virtual servers for Kubernetes ingress resources, one for HTTP and the other for HTTPS. For more information, see [Supported Load Balancer Features](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-91F2D574-F469-481A-AA39-CD6DBC9682CA.html) @@ -132,7 +132,7 @@ For example, `8080` or `http`. Kubernetes requires the port name be specified for multi-port services. For example, the following is a `LoadBalancer` service definition for an -<%= vars.product_short %>-provisioned cluster with NSX: +Tanzu Kubernetes Grid Integrated Edition-provisioned cluster with NSX: ``` kind: Service diff --git a/nsxt-ingress.html.md.erb b/nsxt-ingress.html.md.erb index 9dcc0c3d7..336d49e77 100644 --- a/nsxt-ingress.html.md.erb +++ b/nsxt-ingress.html.md.erb @@ -4,7 +4,7 @@ owner: TKGI iaas: vsphere-nsxt --- -The following topics describe how to configure the NSX load balancer used for ingress resources for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +The following topics describe how to configure the NSX load balancer used for ingress resources for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). Layer 7 load balancing is implemented via a Kubernetes ingress resource. The ingress is allocated an IP from the Floating IP Pool specified in the NSX configuration. NCP exposes the ingress load balancer service on this IP address for both the HTTP and HTTPS ports (port 80 and 443). diff --git a/nsxt-install-edges.html.md.erb b/nsxt-install-edges.html.md.erb index 72e252f08..5c395c1ef 100644 --- a/nsxt-install-edges.html.md.erb +++ b/nsxt-install-edges.html.md.erb @@ -3,7 +3,7 @@ title: Install and Configure the NSX Edge Nodes owner: TKGI-NSXT --- -This topic describes how to deploy and configure NSX-T Data Center v3.0 NSX-T Edge Nodes for use with <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on vSphere. +This topic describes how to deploy and configure NSX-T Data Center v3.0 NSX-T Edge Nodes for use with VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere. ## Prerequisites @@ -34,18 +34,18 @@ Before completing this section, make sure you have completed the following secti In this section you deploy two NSX Edge Nodes. -NSX Edge Nodes provide the bridge between the virtual network environment implemented using NSX and the physical network. Edge Nodes for <%= vars.product_short %> run load balancers for <%= vars.control_plane %> traffic, Kubernetes load balancer services, and ingress controllers. See [Load Balancers in <%= vars.product_short %>](./about-lb.html) for more information. +NSX Edge Nodes provide the bridge between the virtual network environment implemented using NSX and the physical network. Edge Nodes for Tanzu Kubernetes Grid Integrated Edition run load balancers for TKGI API traffic, Kubernetes load balancer services, and ingress controllers. See [Load Balancers in Tanzu Kubernetes Grid Integrated Edition](./about-lb.html) for more information. -In NSX, a load balancer is deployed on the Edge Nodes as a virtual server. 
The following virtual servers are required for <%= vars.product_short %>: +In NSX, a load balancer is deployed on the Edge Nodes as a virtual server. The following virtual servers are required for Tanzu Kubernetes Grid Integrated Edition: - 1 TCP Layer 4 virtual server for each Kubernetes service of type:`LoadBalancer` - 2 Layer 7 global virtual servers for Kubernetes pod ingress resources (HTTP and HTTPS) -- 1 global virtual server for the <%= vars.control_plane %> +- 1 global virtual server for the TKGI API The number of virtual servers that can be run depends on the size of the load balancer which depends on the size of the Edge Node. The default size of the load balancer deployed by NSX for a Kubernetes cluster is `small`. -<%= vars.product_short %> supports only the `medium`, `large` and larger VM Edge Node form factors and the bare metal Edge Node. +Tanzu Kubernetes Grid Integrated Edition supports only the `medium`, `large` and larger VM Edge Node form factors and the bare metal Edge Node. Customize the size of the load balancer using Network Profiles. For this installation, we use the Large VM form factor for the Edge Node. See [VMware Configuration Maximums](https://configmax.vmware.com/guest?vmwareproduct=VMware%20NSX&release=NSX%20Data%20Center%203.0.0&categories=17-0) for more information. diff --git a/nsxt-install-managers.html.md.erb b/nsxt-install-managers.html.md.erb index fdc2abe61..93812532f 100644 --- a/nsxt-install-managers.html.md.erb +++ b/nsxt-install-managers.html.md.erb @@ -3,7 +3,7 @@ title: Installing and Configuring VMware NSX Managers owner: TKGI-NSXT --- -This topic describes how to install and configure NSX Managers on vSphere in a clustered arrangement for high-availability for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to install and configure NSX Managers on vSphere in a clustered arrangement for high-availability for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). ## Prerequisites diff --git a/nsxt-install-objects-k8s.html.md.erb b/nsxt-install-objects-k8s.html.md.erb index da8a8e574..1baa53b8b 100644 --- a/nsxt-install-objects-k8s.html.md.erb +++ b/nsxt-install-objects-k8s.html.md.erb @@ -3,7 +3,7 @@ title: Create the VMware NSX Objects for Kubernetes Clusters Provisioned by TKGI owner: TKGI-NSXT --- -This topic describes how to create NSX objects for the <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) control plane where Kubernetes clusters run. +This topic describes how to create NSX objects for the VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) control plane where Kubernetes clusters run. ## Prerequisites @@ -36,7 +36,7 @@ Before completing this section, make sure you have completed the following secti -## Required NSX Objects for the <%= vars.product_short %> Control Plane +## Required NSX Objects for the Tanzu Kubernetes Grid Integrated Edition Control Plane To install TKGI on vSphere with NSX, you need to create the following NSX objects: diff --git a/nsxt-install-objects-mgmt.html.md.erb b/nsxt-install-objects-mgmt.html.md.erb index 9d86519b5..8d2e1838f 100644 --- a/nsxt-install-objects-mgmt.html.md.erb +++ b/nsxt-install-objects-mgmt.html.md.erb @@ -3,7 +3,7 @@ title: Create VMware NSX Objects for the Management Plane owner: TKGI-NSXT --- -This topic describes how to create NSX objects for the <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) Management Plane. 
+This topic describes how to create NSX objects for the VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) Management Plane. ## Prerequisites @@ -41,7 +41,7 @@ Before completing this section, make sure you have completed the following secti ## Create Management Plane -Networking for the <%= vars.k8s_runtime_abbr %> Management Plane consists of a [Tier-1 Router and Switch](#nsxt30-t1-router) with [NAT Rules](#nsxt30-t0-nat) for the Management Plane VMs. +Networking for the TKGI Management Plane consists of a [Tier-1 Router and Switch](#nsxt30-t1-router) with [NAT Rules](#nsxt30-t0-nat) for the Management Plane VMs. ### Create Tier-1 Router and Switch diff --git a/nsxt-install-password.html.md.erb b/nsxt-install-password.html.md.erb index ad584d3f7..66f7aa4a0 100644 --- a/nsxt-install-password.html.md.erb +++ b/nsxt-install-password.html.md.erb @@ -3,7 +3,7 @@ title: Configure VMware NSX Passwords owner: TKGI-NSXT --- -This topic describes how to configure NSX passwords after you have installed NSX for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to configure NSX passwords after you have installed NSX for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). ## Prerequisites @@ -46,7 +46,7 @@ Before completing this section, make sure you have completed the following secti The default NSX password expiration interval is 90 days. After this period, the NSX passwords will expire on all NSX Manager Nodes and all NSX Edge Nodes. To avoid this, you can extend or remove the password expiration interval, or change the password if needed. -Note: For existing <%= vars.product_short %> deployments, anytime the NSX password is changed you must update the BOSH and PKS tiles with the new passwords. See Adding Infrastructure Password Changes to the <%= vars.product_short %> Tile for more information.
+Note: For existing Tanzu Kubernetes Grid Integrated Edition deployments, anytime the NSX password is changed you must update the BOSH and PKS tiles with the new passwords. See Adding Infrastructure Password Changes to the Tanzu Kubernetes Grid Integrated Edition Tile for more information.
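For reference, the expiration interval described above can be read and adjusted from NSXCLI on each NSX Manager and Edge Node. A sketch based on the NSXCLI syntax in recent NSX-T releases; verify the commands against your NSX version before using them:
```
# Show the current interval in days
get user admin password-expiration

# Extend the interval (for example, to 9999 days)
set user admin password-expiration 9999

# Or remove the expiration entirely
clear user admin password-expiration
```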
### Update the NSX Manager Password and Password Interval diff --git a/nsxt-install-prereqs.html.md.erb b/nsxt-install-prereqs.html.md.erb index a3ef3afbf..47e114dc1 100644 --- a/nsxt-install-prereqs.html.md.erb +++ b/nsxt-install-prereqs.html.md.erb @@ -3,12 +3,12 @@ title: Prerequisites for Installing and Configuring VMware NSX v3 for TKGI owner: TKGI-NSXT --- -This topic describes the prerequisites for installing and configuring NSX Data Center v3 for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on vSphere. +This topic describes the prerequisites for installing and configuring NSX Data Center v3 for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere. ## Prerequisites for Installing VMware NSX -To perform a new installation of VMware NSX for <%= vars.product_short %>, complete the following steps in the order presented. +To perform a new installation of VMware NSX for Tanzu Kubernetes Grid Integrated Edition, complete the following steps in the order presented. -1. Read the [Release Notes](./release-notes.html) for the target <%= vars.k8s_runtime_abbr %> version you are installing and verify NSX v3 support. +1. Read the [Release Notes](./release-notes.html) for the target TKGI version you are installing and verify NSX v3 support. -1. Read the topics in the [Preparing to Install <%= vars.product_short %> on vSphere with VMware NSX](./vsphere-nsxt-index-prepare.html) section of the documentation. +1. Read the topics in the [Preparing to Install Tanzu Kubernetes Grid Integrated Edition on vSphere with VMware NSX](./vsphere-nsxt-index-prepare.html) section of the documentation. diff --git a/nsxt-install-tls-certs.html.md.erb b/nsxt-install-tls-certs.html.md.erb index e03bfe27d..310553f36 100644 --- a/nsxt-install-tls-certs.html.md.erb +++ b/nsxt-install-tls-certs.html.md.erb @@ -3,7 +3,7 @@ title: Generate and Register the NSX Management TLS Certificate and Private Key owner: TKGI-NSXT --- -This topic describes how to install and configure an NSX Data Center v3 Management TLS Certificate for use with <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on vSphere. +This topic describes how to install and configure an NSX Data Center v3 Management TLS Certificate for use with VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere. ## Prerequisites @@ -156,7 +156,7 @@ To register the imported VIP certificate with the NSX Management Cluster Certifi } ``` -1. (Optional) If you are running <%= vars.k8s_runtime_abbr %> in a test environment and you are not using a multi-node NSX Management cluster, +1. (Optional) If you are running TKGI in a test environment and you are not using a multi-node NSX Management cluster, then you must also post the certificate to the Nodes API. ``` diff --git a/nsxt-install-transports.html.md.erb b/nsxt-install-transports.html.md.erb index 113e7318f..f367aea49 100644 --- a/nsxt-install-transports.html.md.erb +++ b/nsxt-install-transports.html.md.erb @@ -3,7 +3,7 @@ title: Installing and Configuring NSX Transport Nodes owner: TKGI-NSXT --- -This topic describes how to install and configure NSX Data Center v3.0 for use with <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on vSphere. +This topic describes how to install and configure NSX Data Center v3.0 for use with VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere. 
## Prerequisites @@ -35,9 +35,9 @@ Before completing this section, make sure you have completed the following secti -## Prerequisites for Installing NSX-T Data Center v3.0 for <%= vars.product_short %> +## Prerequisites for Installing NSX-T Data Center v3.0 for Tanzu Kubernetes Grid Integrated Edition -To perform a new installation of VMware NSX for <%= vars.product_short %>, complete the following steps in the order presented. +To perform a new installation of VMware NSX for Tanzu Kubernetes Grid Integrated Edition, complete the following steps in the order presented. ## Deploy ESXi Host Transport Nodes Using VDS diff --git a/nsxt-install-tzs.html.md.erb b/nsxt-install-tzs.html.md.erb index 3bffd2f85..7d495950d 100644 --- a/nsxt-install-tzs.html.md.erb +++ b/nsxt-install-tzs.html.md.erb @@ -3,7 +3,7 @@ title: Configuring VMware NSX v3 Transport Zones and Edge Node Switches for Tanz owner: TKGI-NSXT --- -This topic describes how to configure NSX Data Center v3 Transport Zones and N-VDS switches on NSX Edge Nodes for use with <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on vSphere. +This topic describes how to configure NSX Data Center v3 Transport Zones and N-VDS switches on NSX Edge Nodes for use with VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere. ## Prerequisites @@ -26,11 +26,11 @@ Before completing this section, make sure you have completed the following secti ## Overview of Transport Zones for NSX -<%= vars.k8s_runtime_abbr %> requires two Transport Zones for <%= vars.k8s_runtime_abbr %>: an Overlay Transport Zone for the ESXi Transport Nodes +TKGI requires two Transport Zones for TKGI: an Overlay Transport Zone for the ESXi Transport Nodes and a VLAN Transport Zone for Edge Nodes. -<%= vars.k8s_runtime_abbr %> requires that the host switch name associated with the Transport Zones -match exactly the **Edge Switch Name** value that you specify when you configure an NSX Edge Node for use with <%= vars.k8s_runtime_abbr %>. +TKGI requires that the host switch name associated with the Transport Zones +match exactly the **Edge Switch Name** value that you specify when you configure an NSX Edge Node for use with TKGI. You can configure your Transport Zones in three ways. The three configuration options require different levels of customization to complete: @@ -63,11 +63,11 @@ The three configuration options require different levels of customization to comNote: In NSX 3.1 and later, the Transport Zone Host Switch Name has been deprecated and removed from the NSX configuration UI. - For more information, see <%= vars.k8s_runtime_abbr %> NSX Edge Switch and Transport Zone Host Switch Name Requirements.
+ For more information, see TKGI NSX Edge Switch and Transport Zone Host Switch Name Requirements. -## Configure Your NSX Transport Zones for <%= vars.k8s_runtime_abbr %> +## Configure Your NSX Transport Zones for TKGI -<%= vars.k8s_runtime_abbr %> requires the NSX **Edge Switch Name** and the Transport Zone host switch name to be identical. +TKGI requires the NSX **Edge Switch Name** and the Transport Zone host switch name to be identical. You can configure identical Edge Switch and Transport Zone host switch names using the following methods: * [Option 1: Use the Default Transport Zones with a Single N-VDS Switch](#option1) (recommended) @@ -118,7 +118,7 @@ To use this option:Note: If you use the default Transport Zones, but do not use the exact name nsxHostSwitch
when configuring NSX on the Edge Node, you will receive the pks-nsx-t-osb-proxy
BOSH error when you try to deploy <%= vars.k8s_runtime_abbr %>.
Note: If you use the default Transport Zones, but do not use the exact name nsxHostSwitch
when configuring NSX on the Edge Node, you will receive the pks-nsx-t-osb-proxy
BOSH error when you try to deploy TKGI.
Note: The NSX 3.x Edge Node configuration displays the following message beside the Edge Switch Name field: "The switch name value need not be identical to host switch name associated with the Transport Zone." - This message does not apply to <%= vars.k8s_runtime_abbr %>.
+ This message does not apply to TKGI. If there is a mismatch between the the host switch name associated with the Transport Zone and the **Edge Switch Name**, -<%= vars.k8s_runtime_abbr %> installation fails with the following error: +TKGI installation fails with the following error: ``` Failed to get NSX provisioning properties: No transport zone with overlay type found in transport node as switch name is not same across the TZ and ESXI TN diff --git a/nsxt-install-vsphere-net.html.md.erb b/nsxt-install-vsphere-net.html.md.erb index 8df1faada..bd6f12636 100644 --- a/nsxt-install-vsphere-net.html.md.erb +++ b/nsxt-install-vsphere-net.html.md.erb @@ -3,7 +3,7 @@ title: Configure vSphere Networking for ESXi Hosts owner: TKGI-NSXT --- -This topic describes how to configure vSphere Networking for ESXi Hosts for <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to configure vSphere Networking for ESXi Hosts for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). ## Prerequisites diff --git a/nsxt-install-vtep.html.md.erb b/nsxt-install-vtep.html.md.erb index ec4c4fe3b..92d03f9c4 100644 --- a/nsxt-install-vtep.html.md.erb +++ b/nsxt-install-vtep.html.md.erb @@ -3,7 +3,7 @@ title: Create an IP Pool for VTEP owner: TKGI-NSXT --- -This topic describes how to install and configure NSX Data Center v3.0 for use with <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on vSphere. +This topic describes how to install and configure NSX Data Center v3.0 for use with VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere. ## Create an IP Pool for VTEP diff --git a/nsxt-lb-tkgi-api.html.md.erb b/nsxt-lb-tkgi-api.html.md.erb index d6a9ba2e5..769abcc2b 100644 --- a/nsxt-lb-tkgi-api.html.md.erb +++ b/nsxt-lb-tkgi-api.html.md.erb @@ -3,11 +3,11 @@ title: Provisioning a VMware NSX Load Balancer for the TKGI API Server owner: PKS-NSXT --- -This topic describes how to deploy an NSX load balancer for the <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) API Server. +This topic describes how to deploy an NSX load balancer for the VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) API Server. ## About the NSX Load Balancer for the TKGI API Server -If you deploy <%= vars.product_short %> on vSphere with NSX with the <%= vars.control_plane %> in high-availability mode, you must configure an NSX load balancer for the <%= vars.control_plane %> traffic. For more information, see [Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments on vSphere with NSX‑T](./about-lb.html#with-nsx-t). +If you deploy Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX with the TKGI API in high-availability mode, you must configure an NSX load balancer for the TKGI API traffic. For more information, see [Load Balancers in Tanzu Kubernetes Grid Integrated Edition Deployments on vSphere with NSX‑T](./about-lb.html#with-nsx-t). To provision an NSX load balancer for the TKGI API Server VM, complete the following steps. @@ -18,7 +18,7 @@ If you are using a Dynamic Server Pool, create an NSGroup as described in this s 1. Log in to an NSX Manager Node.Note: You can connect to any NSX Manager Node in the management cluster to provision the load balancer.
1. Select the **Advanced Networking & Security** tab. -Note: You must use the Advanced Networking and Security tab in NSX Manager to create, read, update, and delete all NSX networking objects used for <%= vars.product_short %>.
+Note: You must use the Advanced Networking and Security tab in NSX Manager to create, read, update, and delete all NSX networking objects used for Tanzu Kubernetes Grid Integrated Edition.
1. Select **Inventory > Groups**.Note: You can connect to any NSX Manager Node in the management cluster to provision the load balancer.
1. Select the **Advanced Networking & Security** tab. -Note: You must use the Advanced Networking and Security tab in NSX Manager to create, read, update, and delete all NSX networking objects used for <%= vars.product_short %>.
+Note: You must use the Advanced Networking and Security tab in NSX Manager to create, read, update, and delete all NSX networking objects used for Tanzu Kubernetes Grid Integrated Edition.
### Step 2: Configure a Logical Switch @@ -221,7 +221,7 @@ At the Health Monitors screen, specify the Active Health Monitor you just create ### Step 12: Create SNAT Rule -If your <%= vars.product_short %> deployment uses NAT mode, make sure Health Monitoring traffic is correctly SNAT-translated when leaving the NSX topology. Add a specific SNAT rule that intercepts HM traffic generated by the load balancer and translates this to a globally-routable IP Address allocated using the same principle of the load balancer VIP. The following screenshot illustrates an example of SNAT rule added to the Tier0 Router to enable HM SNAT translation. In the example, `100.64.128.0/31` is the subnet for the Load Balancer Tier-1 uplink interface. +If your Tanzu Kubernetes Grid Integrated Edition deployment uses NAT mode, make sure Health Monitoring traffic is correctly SNAT-translated when leaving the NSX topology. Add a specific SNAT rule that intercepts HM traffic generated by the load balancer and translates this to a globally-routable IP Address allocated using the same principle of the load balancer VIP. The following screenshot illustrates an example of SNAT rule added to the Tier0 Router to enable HM SNAT translation. In the example, `100.64.128.0/31` is the subnet for the Load Balancer Tier-1 uplink interface. To do this you need to retrieve the IP of the T1 uplink (Tier-1 Router that connected the NSX LB instance). In the example below, the T1 uplink IP is `100.64.112.37/31`. diff --git a/nsxt-multi-t0.html.md.erb b/nsxt-multi-t0.html.md.erb index c717cf818..8fe6b7f7b 100644 --- a/nsxt-multi-t0.html.md.erb +++ b/nsxt-multi-t0.html.md.erb @@ -3,7 +3,7 @@ title: Isolating Tenants owner: TKGI --- -This topic describes how to isolate tenants in <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) multi-tenant environments. +This topic describes how to isolate tenants in VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) multi-tenant environments.Note: An Edge Cluster can have a maximum of 10 Edge Nodes. If the provisioning requires more Edge Nodes than what a single Edge Cluster can support, multiple Edge Clusters must be deployed.
@@ -143,11 +140,11 @@ To define a logical switch based on an Overlay or VLAN transport zone, follow th 1. In NSX Manager, go to **Networking** > **Switching** > **Switches**. 1. Click **Add** and create a logical switch (LS). 1. Name the switch descriptively, such as `inter-t0-logical-switch`. -1. Connect the logical switch to the transport zone defined when deploying NSX. See [Installing and Configuring NSX-T Data Center v3.0 for <%= vars.k8s_runtime_abbr %>](./nsxt-3-0-install.html). +1. Connect the logical switch to the transport zone defined when deploying NSX. See [Installing and Configuring NSX-T Data Center v3.0 for TKGI](./nsxt-3-0-install.html). ### Step 3: Configure a New Uplink Interface on the Shared Tier-0 Router -The Shared Tier-0 router already has an uplink interface to the external (physical) network that was configured when it was created. For more information, see [Installing and Configuring NSX-T Data Center v3.0 for <%= vars.k8s_runtime_abbr %>](./nsxt-3-0-install.html). +The Shared Tier-0 router already has an uplink interface to the external (physical) network that was configured when it was created. For more information, see [Installing and Configuring NSX-T Data Center v3.0 for TKGI](./nsxt-3-0-install.html). To enable Multi-T0, you must configure a second uplink interface on the Shared Tier-0 router that connects to the inter-T0 network (`inter-t0-logical-switch`, for example). To do this, complete the following steps: @@ -160,7 +157,7 @@ To enable Multi-T0, you must configure a second uplink interface on the Shared T ### Step 4: Provision Tier-0 Router for Each Tenant -Create a Tier-0 logical router for each tenant you want to isolate. For more information, see [Create Tier-0 Router](./nsxt-3-0-install.html#nsxt30-t0-router-create) in _Installing and Configuring NSX-T Data Center v3.0 for <%= vars.k8s_runtime_abbr %>_. +Create a Tier-0 logical router for each tenant you want to isolate. For more information, see [Create Tier-0 Router](./nsxt-3-0-install.html#nsxt30-t0-router-create) in _Installing and Configuring NSX-T Data Center v3.0 for TKGI_. When creating each Tenant Tier-0 router, make sure you set the router to be active/passive, and be sure to name the logical switch descriptively, such as `t0-router-customer-A`. @@ -172,7 +169,7 @@ Similar to the Shared Tier-0 router, each Tenant Tier-0 router requires at a min - The second uplink interface provides an uplink connection to the Inter-T0 logical switch that you configured. For example, `inter-t0-logical-switch`. -For instructions, see [Create Tier-0 Router](./nsxt-3-0-install.html#nsxt30-t0-router-create) in _Installing and Configuring NSX-T Data Center v3.0 for <%= vars.k8s_runtime_abbr %>_. When creating the uplink interface that provides an uplink connection to the Inter-T0 logical switch, be sure to give this uplink interface an IP address from the allocated pool of IP addresses. +For instructions, see [Create Tier-0 Router](./nsxt-3-0-install.html#nsxt30-t0-router-create) in _Installing and Configuring NSX-T Data Center v3.0 for TKGI_. When creating the uplink interface that provides an uplink connection to the Inter-T0 logical switch, be sure to give this uplink interface an IP address from the allocated pool of IP addresses. ### Step 6: Verify the Status of the Shared and Tenant Tier-0 Routers @@ -190,7 +187,7 @@ Similarly, the Tenant Tier-0 has one uplink interface at `10.40.206.13/25` on th To configure static routes: -1. 
For each T0 router, including the Shared Tier-0 and all Tenant Tier-0 routers, define a static route to the external network. For instructions, see [Create Tier-0 Router](./nsxt-3-0-install.html#nsxt30-t0-router-create) in _Installing and Configuring NSX-T Data Center v3.0 for <%= vars.k8s_runtime_abbr %>_. +1. For each T0 router, including the Shared Tier-0 and all Tenant Tier-0 routers, define a static route to the external network. For instructions, see [Create Tier-0 Router](./nsxt-3-0-install.html#nsxt30-t0-router-create) in _Installing and Configuring NSX-T Data Center v3.0 for TKGI_. 1. For the Shared Tier-0 router, the default static route points to the external management components such as vCenter and NSX Manager and provides internet connectivity. @@ -206,25 +203,25 @@ To configure static routes: ### Step 8: Considerations for NAT Topology on Shared Tier-0 -The Multi-T0 configuration steps documented here apply to deployments where NAT mode is **not** used on the Shared Tier-0 router. For more information, see NSX Deployment Topologies for <%= vars.product_short %>. +The Multi-T0 configuration steps documented here apply to deployments where NAT mode is **not** used on the Shared Tier-0 router. For more information, see NSX Deployment Topologies for Tanzu Kubernetes Grid Integrated Edition. For deployments where NAT-mode is used on the Shared Tier-0 router, additional provisioning steps must be followed to preserve NAT functionality to external networks while bypassing NAT rules for traffic flowing from the Shared Tier-0 router to each Tenant Tier-0 router. -Existing <%= vars.product_short %> deployments where NAT mode is configured on the Shared Tier-0 router cannot be re-purposed to support a Multi-T0 deployment following this documentation. +Existing Tanzu Kubernetes Grid Integrated Edition deployments where NAT mode is configured on the Shared Tier-0 router cannot be re-purposed to support a Multi-T0 deployment following this documentation. ### Step 9: Considerations for NAT Topology on Tenant Tier-0 -Note: This step only applies to NAT topologies on the Tenant Tier-0 router. For more information on NAT mode, see NSX Deployment Topologies for <%= vars.k8s_runtime_abbr %>.
+Note: This step only applies to NAT topologies on the Tenant Tier-0 router. For more information on NAT mode, see NSX Deployment Topologies for TKGI.
Note: NAT mode for Tenant Tier-0 routers is enabled by defining a non-routable custom Pods IP Block using a Network Profile. For more information, see Defining Network Profiles.
-In a Multi-T0 environment with NAT mode, traffic on the Tenant Tier-0 network going from Kubernetes cluster nodes to <%= vars.k8s_runtime_abbr %> management components residing on the Shared Tier-0 router must bypass NAT rules. This is required because TKGI-managed components such as BOSH Director connect to Kubernetes nodes based on routable connectivity without NAT. +In a Multi-T0 environment with NAT mode, traffic on the Tenant Tier-0 network going from Kubernetes cluster nodes to TKGI management components residing on the Shared Tier-0 router must bypass NAT rules. This is required because TKGI-managed components such as BOSH Director connect to Kubernetes nodes based on routable connectivity without NAT. -To avoid NAT rules being applied to this class of traffic, you need to create two high-priority **NO_SNAT** rules on each Tenant Tier-0 router. These NO_SNAT rules allow "selective" bypass of NAT for the relevant class of traffic, which in this case is connectivity from Kubernetes node networks to <%= vars.k8s_runtime_abbr %> management components such as the <%= vars.control_plane %>, Ops Manager, and BOSH Director, as well as to infrastructure components such as vCenter and NSX Manager. +To avoid NAT rules being applied to this class of traffic, you need to create two high-priority **NO_SNAT** rules on each Tenant Tier-0 router. These NO_SNAT rules allow "selective" bypass of NAT for the relevant class of traffic, which in this case is connectivity from Kubernetes node networks to TKGI management components such as the TKGI API, Ops Manager, and BOSH Director, as well as to infrastructure components such as vCenter and NSX Manager. -For each Tenant Tier-0 router, define two NO_SNAT rules to classify traffic. The source for both rules is the [Nodes IP Block](./nsxt-prepare-env.html#plan-ip-blocks) CIDR. The destination for one rule is the <%= vars.k8s_runtime_abbr %> Management network where <%= vars.k8s_runtime_abbr %>, Ops Manager, and BOSH Director are deployed. The destination for the other rule is the external network where NSX Manager and vCenter are deployed. +For each Tenant Tier-0 router, define two NO_SNAT rules to classify traffic. The source for both rules is the [Nodes IP Block](./nsxt-prepare-env.html#plan-ip-blocks) CIDR. The destination for one rule is the TKGI Management network where TKGI, Ops Manager, and BOSH Director are deployed. The destination for the other rule is the external network where NSX Manager and vCenter are deployed. -For example, the following image shows two NO_SNAT rules created on a Tenant Tier-0 router. The first rule un-NATs traffic from Kubernetes nodes (`30.0.128.0/17`) to the <%= vars.k8s_runtime_abbr %> management network (`30.0.0.0/24`). The second rule un-NATs traffic from Kubernetes nodes (`30.0.128.0/17`) to the external network (`192.168.201.0/24`). +For example, the following image shows two NO_SNAT rules created on a Tenant Tier-0 router. The first rule un-NATs traffic from Kubernetes nodes (`30.0.128.0/17`) to the TKGI management network (`30.0.0.0/24`). The second rule un-NATs traffic from Kubernetes nodes (`30.0.128.0/17`) to the external network (`192.168.201.0/24`).  @@ -257,7 +254,7 @@ In a Multi-T0 deployment, special consideration must be given to the network des Failover of a logical router is triggered when the router is losing all of its BGP sessions. If multiple BGP sessions are established across different uplink interfaces of a Tier-0 router, failover will only occur if **all** such sessions are lost. 
Thus, to ensure high availability on the Shared and Tenant Tier-0 routers, BGP can only be configured on uplink interfaces facing the Inter-Tier-0 network. This configuration is shown in the diagram below. -Note: In a Multi-T0 deployment, BGP cannot be configured on external uplink interfaces. Uplink external connectivity must use VIP-HA with NSX to provide high availability for external interfaces. For more information, see Deploy NSX Edge Nodes in Installing and Configuring NSX-T Data Center v3.0 for <%= vars.k8s_runtime_abbr %>.
+Note: In a Multi-T0 deployment, BGP cannot be configured on external uplink interfaces. Uplink external connectivity must use VIP-HA with NSX to provide high availability for external interfaces. For more information, see Deploy NSX Edge Nodes in Installing and Configuring NSX-T Data Center v3.0 for TKGI.
 @@ -325,7 +322,7 @@ To configure BGP peering for each Tenant Tier-0 router, follow the steps below: ### Step 11: Configure BGP on the Shared Tier-0 Router -The configuration of BGP on the Shared Tier-0 is similar to the BGP configuration each Tenant Tier-0, with the exception of the IP Prefix list that permits traffic to the <%= vars.k8s_runtime_abbr %> management network where <%= vars.k8s_runtime_abbr %>, BOSH, and Ops Manager are located. +The configuration of BGP on the Shared Tier-0 is similar to the BGP configuration each Tenant Tier-0, with the exception of the IP Prefix list that permits traffic to the TKGI management network where TKGI, BOSH, and Ops Manager are located. As with each Tenant Tier-0 router, you will need to assign a unique private AS number within the private range `64512-65534` to the Shared Tier-0 router. Once the AS number is assigned, use NSX Manager to configure the following BGP rules for the Shared Tier-0 router. @@ -346,7 +343,7 @@ To configure IP prefix lists for each Tenant Tier-0 router, follow the steps bel 1. Click **Add** and configure as follows: 1. **Name**: Enter a descriptive name. 1. Click **Add** and create a **Permit** rule for the infrastructure components vCenter and NSX Manager. - 1. Click **Add** and create a **Permit** rule for the <%= vars.k8s_runtime_abbr %> management components (<%= vars.k8s_runtime_abbr %>, Ops Manager, and BOSH). + 1. Click **Add** and create a **Permit** rule for the TKGI management components (TKGI, Ops Manager, and BOSH). 1. Click **Add** and create a **Deny** rule that denies everything else on the network `0.0.0.0/0`.  @@ -360,7 +357,7 @@ To configure IP prefix lists for each Tenant Tier-0 router, follow the steps bel 1. **Address Families**: Click **Add** and configure as follows: 1. **Type**: IPV4_UNICAST 1. **State**: Enabled - 1. **Out Filter**: Select the IP Prefix List that includes the network where vCenter and NSX Manager are deployed, as well as the network where the <%= vars.k8s_runtime_abbr %> management plane is deployed. + 1. **Out Filter**: Select the IP Prefix List that includes the network where vCenter and NSX Manager are deployed, as well as the network where the TKGI management plane is deployed. 1. Click **Add**. 1. Back at the **Routing** > **BGP** screen: 1. Enter the Tenant Tier-0 AS number. @@ -405,13 +402,13 @@ To verify BGP Peering: 1. Repeat for all other Tenant Tier-0 routers. Verify that the T0 routing table for each Tenant Tier-0 includes all BGP routes to reach vCenter, -NSX Manager, and the <%= vars.k8s_runtime_abbr %> management network: +NSX Manager, and the TKGI management network: 1. In NSX Manager, select **Networking** > **Routers** > **Routing**. 1. Select the T0 router and choose **Actions** > **Download Routing Table**. 1. Download the routing table for each of the Tenant Tier-0 routers. -Note: At this point, the Shared Tier-0 has no BGP routes because you have not deployed any Kubernetes clusters. The Shared Tier-0 will show BGP routes when you deploy Kubernetes clusters to the Tenant Tier-0 routers. Each Tenant Tier-0 router shows a BGP exported route that makes each Tenant Tier-0 router aware of the <%= vars.k8s_runtime_abbr %> management network and other external networks where NSX and vCenter are deployed.
+Note: At this point, the Shared Tier-0 has no BGP routes because you have not deployed any Kubernetes clusters. The Shared Tier-0 will show BGP routes when you deploy Kubernetes clusters to the Tenant Tier-0 routers. Each Tenant Tier-0 router shows a BGP exported route that makes each Tenant Tier-0 router aware of the TKGI management network and other external networks where NSX and vCenter are deployed.
Note: These are the minimum IP Sets you need to create. You might want to define additional IP Sets for convenience.
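For reference, each of these IP Sets can also be created through the NSX Management API rather than the Manager UI. A minimal sketch that reuses the example TKGI management network CIDR from this topic; the Manager address, credentials, and display name are placeholders:
```
# Create an IP Set for the TKGI management network
curl -k -u 'admin:NSX-ADMIN-PASSWORD' \
  -H "Content-Type: application/json" \
  -X POST "https://NSX-MANAGER-IP/api/v1/ip-sets" \
  -d '{
        "display_name": "tkgi-mgmt-network",
        "ip_addresses": ["30.0.0.0/24"]
      }'
```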
@@ -486,7 +483,7 @@ Select the Edge Firewall **Section** you just created, then select **Add Rule**. * [BGP Firewall Rule](#bgp-firewall-rule) * [Clusters Masters Firewall Rule](#masters-firewall-rule) * [Node Network to Management Firewall Rule](#nodes-firewall-rule) -* [<%= vars.k8s_runtime_abbr %> Firewall Rule](#tkgi-firewall-rule) +* [TKGI Firewall Rule](#tkgi-firewall-rule) * [Deny All Firewall Rule](#deny-all-firewall-rule) @@ -518,24 +515,24 @@ Once you have defined the NSGroup, configure the firewall rule as follows. ##### Node Network to Management Firewall Rule -This firewall rule allows Kubernetes node traffic to reach <%= vars.k8s_runtime_abbr %> management VMs and the standard network. +This firewall rule allows Kubernetes node traffic to reach TKGI management VMs and the standard network. - **Name**: `Node-Network-to-Management` - **Direction**: out - **Source**: IP Set defined for the Nodes IP Block network -- **Destination**: IP Sets defined for vCenter, NSX Manager, and <%= vars.k8s_runtime_abbr %> management plane components +- **Destination**: IP Sets defined for vCenter, NSX Manager, and TKGI management plane components - **Service**: Any - **Action**: Allow - Apply the rule to the Inter-T0-Uplink interface. - Save the firewall rule. -##### <%= vars.k8s_runtime_abbr %> Firewall Rule +##### TKGI Firewall Rule -This firewall rule allows <%= vars.k8s_runtime_abbr %> management plane components to talk to Kubernetes nodes. +This firewall rule allows TKGI management plane components to talk to Kubernetes nodes. - **Name**: `TKGI-to-Node-Network` - **Direction**: ingress -- **Source**: IP Set defined for the <%= vars.k8s_runtime_abbr %> management network +- **Source**: IP Set defined for the TKGI management network - **Destination**: IP Set defined for the Nodes IP Block network - **Service**: Any - **Action**: Allow @@ -581,20 +578,20 @@ Those rules will apply to any cluster created after you define the DFW section f ### Secure Intra-Tenant Communications To secure communication between clusters in the same tenancy, you must disallow any form of communication between -Kubernetes clusters created by <%= vars.k8s_runtime_abbr %>. +Kubernetes clusters created by TKGI. Securing inter-cluster communications is achieved by provisioning security groups and DFW rules.Note: You must perform the global procedures, the first three steps described below, before you deploy a Kubernetes cluster to the target tenant Tier-0 router.
To secure communication between clusters in the same tenancy: -1. [Create NSGroup for All <%= vars.product_short %> Clusters](#ns-group) +1. [Create NSGroup for All Tanzu Kubernetes Grid Integrated Edition Clusters](#ns-group) 1. [Create DFW Section](#dfw-section) 1. [Create NSGroups](#ns-groups) 1. [Create DFW Rules](#dfw-rules) -#### Step 1: Create NSGroup for All <%= vars.product_short %> Clusters +#### Step 1: Create NSGroup for All Tanzu Kubernetes Grid Integrated Edition Clusters 1. In NSX Manager, navigate to **Inventory > Groups > Groups** and **Add new group**. 1. Configure the new NSGroup as follows: @@ -624,7 +621,7 @@ To create a DFW section, follow the instructions in [Create DFW Section](#dfw-se Before creating NSGroups, retrieve the UUID of the cluster that you want to secure. To retrieve the cluster UUID, run the `tkgi cluster YOUR-CLUSTER-NAME` command. -For more information about the <%= vars.k8s_runtime_abbr %> CLI, see [<%= vars.k8s_runtime_abbr %> CLI](./cli/index.html). +For more information about the TKGI CLI, see [TKGI CLI](./cli/index.html). ##### Create NSGroup for Cluster Nodes @@ -739,9 +736,6 @@ To isolate a cluster and its workloads behind a VRF gateway: * [Create VRF Gateways](#create-gateway-vrf) * [Create a Network Profile](#create-network-profile-vrf) * [Configure a Cluster with a VRF Gateway](#update-cluster-vrf) -<% if vars.product_version == "COMMENTED" %> -* [Configure VRF Gateway Security](#security-config-vrf) -<% end %>Warning:
The NSX Policy API feature is available at only 50% of NSX Management Plane API scale with VMware NSX v4.0.1.1.
@@ -17,7 +17,7 @@ This topic provides considerations for using the NSX Policy API with <%= vars.pr
The NSX Policy API is the next-generation interface for integrating with the NSX networking and security framework.
-In addition to supporting the NSX Management API, <%= vars.k8s_runtime_abbr %> supports using the NSX Policy API to deploy <%= vars.product_short %> on vSphere.
+In addition to supporting the NSX Management API, TKGI supports using the NSX Policy API to deploy Tanzu Kubernetes Grid Integrated Edition on vSphere.
If you are planning on using the NSX Policy API, keep in mind that only new deployments of TKGI are supported. You cannot configure an existing installation of TKGI to use the NSX Policy API.
@@ -29,9 +29,9 @@ To use the NSX Policy API with your TKGI installation, you must use a supported
## NSX Deployment Topologies
-<%= vars.product_short %> on vSphere with NSX supports several [deployment topologies](./nsxt-topologies.html).
+Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX supports several [deployment topologies](./nsxt-topologies.html).
-Currently <%= vars.product_short %> on vSphere with NSX Policy API supports all network topologies except the [VSS/VDS topology](./nsxt-topologies.html#topology-no-nat-virtual-switch).
+Currently Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX Policy API supports all network topologies except the [VSS/VDS topology](./nsxt-topologies.html#topology-no-nat-virtual-switch).
## NSX Installation
@@ -47,7 +47,7 @@ For specific instructions on creating the required objects, see [Create the NSX
## TKGI Configuration
-When you configure the BOSH Director tile for <%= vars.product_short %>, you must enable the option vCenter Config > NSX Networking > **Use NSX Policy API**. See [Configure NSX Networking](./vsphere-nsxt-om-config.html#vcenter-config).
+When you configure the BOSH Director tile for Tanzu Kubernetes Grid Integrated Edition, you must enable the option vCenter Config > NSX Networking > **Use NSX Policy API**. See [Configure NSX Networking](./vsphere-nsxt-om-config.html#vcenter-config).
Also, when you configure the TKGI tile in Ops Manager, you must enabled Settings > Networking > NSX > **Policy API mode**. See [Configure TKGI Networking](./installing-nsx-t.html#networking).
@@ -57,8 +57,8 @@ If you are using the TKGI Management Console, you need to select the Policy API
## Network Profile
-<%= vars.product_short %> on vSphere with NSX supports the use of [Network Profile](./network-profiles-index.html) for modifying specific NSX settings post-installation. A limited number of network profile use cases are not supported when using TKGI with the NSX Policy API.
+Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX supports the use of [Network Profile](./network-profiles-index.html) for modifying specific NSX settings post-installation. A limited number of network profile use cases are not supported when using TKGI with the NSX Policy API.
-The <%= vars.product_short %> on vSphere with NSX Policy API does not support either the "Top Firewall" or the "Bottom Firewall" DFW Section Markers. For more information, see [DFW Section Markers](./network-profiles-ncp-dfw.html).
+Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX Policy API does not support either the "Top Firewall" or the "Bottom Firewall" DFW Section Markers. For more information, see [DFW Section Markers](./network-profiles-ncp-dfw.html).
-The <%= vars.product_short %> on vSphere with NSX Policy API does not support [NSGroups](./network-profiles-ns-groups.html) if you create the group in a domain other than the default. With the Policy API, a group must be part of a domain. The `default` domain is supported, and if you create the group using the NSX Policy interface, the group is automatically put in the `default` domain. However, if you use the Policy REST API to create a group in a domain other than the default, it is not supported.
+Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX Policy API does not support [NSGroups](./network-profiles-ns-groups.html) if you create the group in a domain other than the default. With the Policy API, a group must be part of a domain. The `default` domain is supported, and if you create the group using the NSX Policy interface, the group is automatically put in the `default` domain. However, groups created with the Policy REST API in a domain other than `default` are not supported.
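To illustrate the domain restriction above, the sketch below creates a group under the supported `default` domain through the Policy REST API; the NSX Manager address, credentials, and group ID are placeholders, and membership criteria are omitted.

```
# Create (or update) a group in the default Policy domain.
# Groups created under any other domain path are not supported by TKGI.
curl -k -u 'admin:NSX-PASSWORD' \
  -X PATCH "https://NSX-MANAGER/policy/api/v1/infra/domains/default/groups/tkgi-example-group" \
  -H 'Content-Type: application/json' \
  -d '{ "display_name": "tkgi-example-group" }'
```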
diff --git a/nsxt-prepare-env.html.md.erb b/nsxt-prepare-env.html.md.erb
index 0d2fdb71f..8e06f37d9 100644
--- a/nsxt-prepare-env.html.md.erb
+++ b/nsxt-prepare-env.html.md.erb
@@ -3,13 +3,13 @@ title: Network Planning for Installing Tanzu Kubernetes Grid Integrated Edition
owner: TKGI
---
-This topic describes how to plan your environment before installing <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on VMware vSphere with NSX integration.
+This topic describes how to plan your environment before installing VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on VMware vSphere with NSX integration.
##Overview
-Before installing <%= vars.product_full %> on VMware vSphere with NSX integration, plan your environment as described in the following sections:
+Before installing VMware Tanzu Kubernetes Grid Integrated Edition on VMware vSphere with NSX integration, plan your environment as described in the following sections:
* [Prerequisites](#prerequisites)
* [Understand Component Interactions](#components)
@@ -36,52 +36,52 @@ Familiarize yourself with the following related documentation:
* [Kubernetes documentation](https://kubernetes.io/docs/home/)
* [containerd documentation](https://containerd.io/docs/)
-Review the following <%= vars.product_short %> documentation:
+Review the following Tanzu Kubernetes Grid Integrated Edition documentation:
* [VMware vSphere with NSX Version Requirements](vsphere-nsxt-requirements.html)
-* [Hardware Requirements for <%= vars.product_short %> on VMware vSphere with NSX](vsphere-nsxt-rpd-mpd.html)
+* [Hardware Requirements for Tanzu Kubernetes Grid Integrated Edition on VMware vSphere with NSX](vsphere-nsxt-rpd-mpd.html)
* [VMware Ports and Protocols](https://ports.vmware.com/home/vSphere+NSX-Data-Center-for-vSphere+NSX-Data-Center)
on the VMware site.
-* [Network Objects Created by NSX for <%= vars.product_short %>](./vsphere-nsxt-cluster-objects.html)
+* [Network Objects Created by NSX for Tanzu Kubernetes Grid Integrated Edition](./vsphere-nsxt-cluster-objects.html)
##Understand Component Interactions
-<%= vars.product_short %> on VMware vSphere with NSX requires the following component interactions:
+Tanzu Kubernetes Grid Integrated Edition on VMware vSphere with NSX requires the following component interactions:
* vCenter, NSX Manager Nodes, NSX Edge Nodes, and ESXi hosts must be able to communicate with each other.
* The BOSH Director VM must be able to communicate with vCenter and the NSX Management Cluster.
* The BOSH Director VM must be able to communicate with all nodes in all Kubernetes clusters.
-* Each <%= vars.product_short %>-provisioned Kubernetes cluster deploys the NSX Node Agent and the Kube Proxy that run as BOSH-managed processes on each worker node.
+* Each Tanzu Kubernetes Grid Integrated Edition-provisioned Kubernetes cluster deploys the NSX Node Agent and the Kube Proxy that run as BOSH-managed processes on each worker node.
* NCP runs as a BOSH-managed process on the Kubernetes control plane node. In a multi-control plane node deployment, the NCP process runs on all control plane nodes, but is active only on one control plane node. If the NCP process on an active control plane node is unresponsive, BOSH activates another NCP process.
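As a quick way to confirm these BOSH-managed processes on a provisioned cluster, you can list per-process state with the BOSH CLI; a sketch, assuming a cluster deployment named `service-instance_<UUID>`:

```
# Show each cluster VM and the state of its BOSH-managed processes,
# including ncp on control plane nodes and nsx-node-agent/kube-proxy on workers.
bosh -d service-instance_CLUSTER-UUID instances --ps
```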
##Plan Deployment Topology
-Review the [Deployment Topologies](nsxt-topologies.html) for <%= vars.product_short %> on VMware vSphere with NSX. The most common deployment topology is the [NAT topology](./nsxt-topologies.html#topology-nat). Decide which deployment topology you will implement, and plan accordingly.
+Review the [Deployment Topologies](nsxt-topologies.html) for Tanzu Kubernetes Grid Integrated Edition on VMware vSphere with NSX. The most common deployment topology is the [NAT topology](./nsxt-topologies.html#topology-nat). Decide which deployment topology you will implement, and plan accordingly.
##Plan Network CIDRs
-Before you install <%= vars.product_short %> on VMware vSphere with NSX, plan the CIDRs and IP blocks that you are using in your deployment.
+Before you install Tanzu Kubernetes Grid Integrated Edition on VMware vSphere with NSX, plan the CIDRs and IP blocks that you are using in your deployment.
Plan for the following network CIDRs in the IPv4 address space according to the instructions in [VMware NSX documentation](https://docs.vmware.com/en/VMware-NSX/index.html):
* **VTEP CIDRs**: One or more of these networks host your GENEVE Tunnel Endpoints on your NSX Transport Nodes. Size the networks to support all of your expected Host and Edge Transport Nodes. For example, a CIDR of `192.168.1.0/24` provides 254 usable IPs.
-* **<%= vars.k8s_runtime_abbr %> MANAGEMENT CIDR**: This small network is used to access <%= vars.product_short %> management components such as Ops Manager, BOSH Director, and <%= vars.product_short %> VMs as well as the Harbor Registry VM if deployed. For example, a CIDR of `10.172.1.0/28` provides 14 usable IPs. For the [No-NAT deployment topologies](nsxt-topologies.html#topology-no-nat-virtual-switch), this is a corporate routable subnet /28. For the [NAT deployment topology](nsxt-topologies.html#topology-nat), this is a non-routable subnet /28, and DNAT needs to be configured in NSX to access the <%= vars.product_short %> management components.
+* **TKGI MANAGEMENT CIDR**: This small network is used to access Tanzu Kubernetes Grid Integrated Edition management components such as Ops Manager, BOSH Director, and Tanzu Kubernetes Grid Integrated Edition VMs as well as the Harbor Registry VM if deployed. For example, a CIDR of `10.172.1.0/28` provides 14 usable IPs. For the [No-NAT deployment topologies](nsxt-topologies.html#topology-no-nat-virtual-switch), this is a corporate routable subnet /28. For the [NAT deployment topology](nsxt-topologies.html#topology-nat), this is a non-routable subnet /28, and DNAT needs to be configured in NSX to access the Tanzu Kubernetes Grid Integrated Edition management components.
-* **<%= vars.k8s_runtime_abbr %> LB CIDR**: This network provides your load balancing address space for each Kubernetes cluster created by <%= vars.product_short %>. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, `10.172.2.0/24` provides 256 usable IPs. This network is used when creating the `ip-pool-vips` described in [Creating VMware NSX Objects for <%= vars.product_short %>](nsxt-create-objects.html), or when the services are deployed. You enter this network in the
-**Floating IP Pool ID** field in the **Networking** pane of the <%= vars.product_tile %> tile.
+* **TKGI LB CIDR**: This network provides your load balancing address space for each Kubernetes cluster created by Tanzu Kubernetes Grid Integrated Edition. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, `10.172.2.0/24` provides 256 usable IPs. This network is used when creating the `ip-pool-vips` described in [Creating VMware NSX Objects for Tanzu Kubernetes Grid Integrated Edition](nsxt-create-objects.html), or when the services are deployed. You enter this network in the
+**Floating IP Pool ID** field in the **Networking** pane of the Tanzu Kubernetes Grid Integrated Edition tile.
##Plan IP Blocks
-When you install <%= vars.product_short %> on VMware NSX, you are required to specify the **Pods IP Block ID** and **Nodes IP Block ID** in the **Networking** pane of the <%= vars.product_tile %> tile.
+When you install Tanzu Kubernetes Grid Integrated Edition on VMware NSX, you are required to specify the **Pods IP Block ID** and **Nodes IP Block ID** in the **Networking** pane of the Tanzu Kubernetes Grid Integrated Edition tile.
The **Pods IP Block ID** and **Nodes IP Block ID** fields map to the two IP blocks you must configure in VMware NSX: the Pods IP Block for Kubernetes pods, and the Node IP Block for Kubernetes nodes (VMs).
@@ -92,7 +92,7 @@ To configure **Pods IP Block ID** and **Nodes IP Block ID**:
* [Nodes IP Block](#nodes-ip-block)
* [Reserved IP Blocks](#reserved-ip-blocks)
-For more information, see the [Networking](installing-nsx-t.html#networking) section of _Installing <%= vars.product_short %> on VMware vSphere with NSX Integration_.
+For more information, see the [Networking](installing-nsx-t.html#networking) section of _Installing Tanzu Kubernetes Grid Integrated Edition on VMware vSphere with NSX Integration_.
@@ -102,10 +102,10 @@ For more information, see the [Networking](installing-nsx-t.html#networking) sec
Each time a Kubernetes namespace is created, a subnet from the **Pods IP Block** is allocated. The subnet size carved out from this block is /24, which means a maximum of 256 pods can be created per namespace.
-When a Kubernetes cluster is deployed by <%= vars.product_short %>, by default 3 namespaces are created. Often additional namespaces will be created by operators to facilitate cluster use. As a result, when creating the **Pods IP Block**, you must use a CIDR range larger than /24 to ensure that NSX has enough IP addresses to allocate for all pods. The recommended size is /16. For more information, see [Creating VMware NSX Objects for <%= vars.product_short %>](nsxt-create-objects.html).
+When a Kubernetes cluster is deployed by Tanzu Kubernetes Grid Integrated Edition, three namespaces are created by default, and operators often create additional namespaces to facilitate cluster use. As a result, when creating the **Pods IP Block**, you must use a CIDR range larger than /24 to ensure that NSX has enough IP addresses to allocate for all pods. The recommended size is /16, which provides 256 /24 namespace subnets. For more information, see [Creating VMware NSX Objects for Tanzu Kubernetes Grid Integrated Edition](nsxt-create-objects.html).
Note: By default, Pods IP Block is a block of non-routable, private IP addresses. -After you deploy <%= vars.product_short %>, you can define a network profile that specifies a routable IP block for your pods. +After you deploy Tanzu Kubernetes Grid Integrated Edition, you can define a network profile that specifies a routable IP block for your pods. The routable IP block overrides the default non-routable Pods IP Block when a Kubernetes cluster is deployed using that network profile. For more information, see Routable Pods in Using Network Profiles (VMware NSX Only).
Note: You can use a smaller nodes block size for no-NAT environments with a limited number of routable subnets. For example, /20 allows up to 16 Kubernetes clusters to be created.
@@ -126,11 +126,11 @@ For example, /20 allows up to 16 Kubernetes clusters to be created.Note: -Do not use reserved IP addresses or CIDR blocks when configuring <%= vars.k8s_runtime_abbr %>. +Do not use reserved IP addresses or CIDR blocks when configuring TKGI.
Worker node VM | No. |
- containerd is installed on each <%= vars.product_short %> worker node and is assigned the 172.17.0.0/16 network interface.
+ containerd is installed on each Tanzu Kubernetes Grid Integrated Edition worker node and is assigned the 172.17.0.0/16 network interface.
Do not use this CIDR range for any TKGI component, including Ops Manager, BOSH Director, the TKGI API VM, the TKGI DB VM, and the Harbor Registry VM. Note: This range is also reserved for the Management Console VM, but is unused. |
@@ -167,7 +167,7 @@ When deploying <%= vars.k8s_runtime_abbr %>, do not use a reserved IP address or
Management Console VM | Yes. See OVA configuration. |
- The <%= vars.product_short %> Management Console runs the Docker daemon and reserves 172.18.0.0/16 for the subnet.
+ The Tanzu Kubernetes Grid Integrated Edition Management Console runs the Docker daemon and reserves 172.18.0.0/16 for the subnet.
Do not use this CIDR range unless you customize them during OVA configuration. |
Management Console VM | Yes. See OVA configuration. |
- The <%= vars.product_short %> Management Console runs the Docker daemon and reserves 172.18.0.1 for the gateway.
+ The Tanzu Kubernetes Grid Integrated Edition Management Console runs the Docker daemon and reserves 172.18.0.1 for the gateway.
Do not use this CIDR range or IP address unless you customize them during OVA configuration. |
@@ -195,9 +195,9 @@ When deploying <%= vars.k8s_runtime_abbr %>, do not use a reserved IP address or
EmailAddress
.EmailAddress
.
Note: The Update all clusters errand must be enabled to update the Kubernetes cloud provider password stored in Kubernetes clusters.
@@ -53,7 +53,7 @@ to update the Kubernetes cloud provider password stored in Kubernetes clusters.< ## Manage Your NSX Manager Password (vSphere and vSphere with NSX only) If you are on vSphere or vSphere with NSX only, you also configured the **NSX Manager Account** and password -when you installed <%= vars.product_short %>. This service account is configured in the BOSH Director tile. +when you installed Tanzu Kubernetes Grid Integrated Edition. This service account is configured in the BOSH Director tile. After changing the password on your network, you must also update the BOSH Director tile's copy of the **NSX Manager Account** password. @@ -81,7 +81,7 @@ file includes the correct vCenter credentials. You see errors similar to the following in your logs: -* Service account errors in the <%= vars.k8s_runtime_abbr %> logs: +* Service account errors in the TKGI logs: ``` error ... Failed to authenticate user ... diff --git a/pod-security-admission.html.md.erb b/pod-security-admission.html.md.erb index d9eb4b003..2f9a58013 100644 --- a/pod-security-admission.html.md.erb +++ b/pod-security-admission.html.md.erb @@ -3,23 +3,23 @@ title: Pod Security Admission in Tanzu Kubernetes Grid Integrated Edition owner: TKGI --- -This topic describes how to use Kubernetes Pod Security Admission (PSA) with <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to use Kubernetes Pod Security Admission (PSA) with VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). > **Note** Support for Kubernetes Pod Security Policy (PSP) has been removed in Kubernetes v1.25. ## About Pod Security Admission -PSA is the Kubernetes-recommended way to implement security standards. <%= vars.k8s_runtime_abbr %> supports the built-in PSA in Kubernetes. -PSA is enabled in <%= vars.k8s_runtime_abbr %>, by default. +PSA is the Kubernetes-recommended way to implement security standards. TKGI supports the built-in PSA in Kubernetes. +PSA is enabled in TKGI, by default. -In <%= vars.k8s_runtime_abbr %>, you can configure PSA in a cluster or in a custom namespace. +In TKGI, you can configure PSA in a cluster or in a custom namespace. For more information on PSA, see [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) in the Kubernetes documentation. -## Pod Security Admission in a <%= vars.k8s_runtime_abbr %> Cluster +## Pod Security Admission in a TKGI Cluster -You can configure cluster-specific PSA in <%= vars.k8s_runtime_abbr %> by using a Kubernetes profile. +You can configure cluster-specific PSA in TKGI by using a Kubernetes profile. 1. Create the `psa-cluster` yaml file containing the following information: @@ -52,10 +52,10 @@ You can configure cluster-specific PSA in <%= vars.k8s_runtime_abbr %> by using - `AUDIT-VERSION` is the version for auditing a possible security policy violation. VMware strongly recommends using `latest` for the audit version. - `WARN-LEVEL` is the level for triggering a warning for a security policy violation. Use a level that is accepted by Kubernetes, for example, `privileged`, `baseline`, or `restricted`. - `WARN-VERSION` is the version for the warning that is triggered for a security policy violation. VMware strongly recommends using `latest` for the warn version. - - `CUSTOM-NAMESPACES` is the <%= vars.k8s_runtime_abbr %> custom namespaces that you want to exclude. 
+ - `CUSTOM-NAMESPACES` is the TKGI custom namespaces that you want to exclude.Note: If you had configured any experimental admission control features by using a Kubernetes profile in the previous version - of <%= vars.k8s_runtime_abbr %>, you must append it under the `plugin` field in the `psa-cluster` yaml file. + of TKGI, you must append it under the `plugin` field in the `psa-cluster` yaml file.
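For reference, the placeholders above correspond to the fields of the upstream Kubernetes `PodSecurityConfiguration` consumed by the built-in PodSecurity admission plugin. The following is only a sketch of that underlying format, written out with a heredoc; the exact wrapper that a TKGI Kubernetes profile expects around this configuration may differ, so treat the file name and structure as assumptions.

```
# Sketch of the built-in PodSecurity admission configuration the placeholders map to.
cat > psa-cluster.yaml <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"        # ENFORCE-LEVEL
      enforce-version: "latest"  # ENFORCE-VERSION
      audit: "baseline"          # AUDIT-LEVEL
      audit-version: "latest"    # AUDIT-VERSION
      warn: "baseline"           # WARN-LEVEL
      warn-version: "latest"     # WARN-VERSION
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: ["kube-system"]  # CUSTOM-NAMESPACES to exclude
EOF
```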
1. Create the `config-psa-custom` json file containing the following information: @@ -86,16 +86,16 @@ For more information about configuring and using Kubernetes Profiles with TKGI, For more information about configuring cluster-level PSA, see [Enforce Pod Security Standards by Configuring the Built-in Admission Controller](https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller) in the Kubernetes documentation. -## Pod Security Admission in a <%= vars.k8s_runtime_abbr %> Custom Namespace +## Pod Security Admission in a TKGI Custom Namespace -> **Note** To control the PSA security permissions in a <%= vars.k8s_runtime_abbr %> namespace, you must have the privileges to create, update, or +> **Note** To control the PSA security permissions in a TKGI namespace, you must have the privileges to create, update, or patch the namespace. To ensure security of the system, restrict the namespace permissions to the trusted user accounts. -The following table describes the required PSA level for <%= vars.k8s_runtime_abbr %> System namespaces: +The following table describes the required PSA level for TKGI System namespaces:<%= vars.k8s_runtime_abbr %> System Namespace | +TKGI System Namespace | PSA Level | |
---|---|---|---|
http_proxy |
HTTP proxy URL and credentials. This overrides the global HTTP Proxy settings - in the <%= vars.k8s_runtime_abbr %> tile > Networking pane. | + in the TKGI tile > Networking pane.||
https_proxy |
HTTPS proxy URL and credentials. This overrides the global HTTP Proxy settings - in the <%= vars.k8s_runtime_abbr %> tile. | + in the TKGI tile.||
no_proxy |
diff --git a/proxies.html.md.erb b/proxies.html.md.erb
index aaa1d4bb1..72869a23e 100644
--- a/proxies.html.md.erb
+++ b/proxies.html.md.erb
@@ -4,41 +4,41 @@ owner: TKGI
topic: proxies-nsx-t
---
-This topic describes how HTTP/HTTPS proxies work in <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) with NSX,
+This topic describes how HTTP/HTTPS proxies work in VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) with NSX,
and how to set proxies globally.
-To configure proxy settings specifically for individual <%= vars.k8s_runtime_abbr %> clusters, see [Configure Cluster Proxies](proxies-cluster.html).
+To configure proxy settings specifically for individual TKGI clusters, see [Configure Cluster Proxies](proxies-cluster.html).
##Overview
-If your environment includes HTTP proxies, you can configure <%= vars.product_short %> with NSX to use these proxies so that <%= vars.product_short %>-deployed Kubernetes control plane and worker nodes access public Internet services and other internal services through a proxy.
+If your environment includes HTTP proxies, you can configure Tanzu Kubernetes Grid Integrated Edition with NSX to use these proxies so that Tanzu Kubernetes Grid Integrated Edition-deployed Kubernetes control plane and worker nodes access public Internet services and other internal services through a proxy.
-In addition, <%= vars.product_short %> proxy settings apply to the <%= vars.control_plane %> instance.
-When an <%= vars.product_short %> operator creates a Kubernetes cluster, the <%= vars.control_plane %> VM behind a proxy is able to manage NSX objects on the standard network.
+In addition, Tanzu Kubernetes Grid Integrated Edition proxy settings apply to the TKGI API instance.
+When a Tanzu Kubernetes Grid Integrated Edition operator creates a Kubernetes cluster, the TKGI API VM behind a proxy is able to manage NSX objects on the standard network.
-You can also proxy outgoing HTTP/HTTPS traffic from Ops Manager and the BOSH Director so that all <%= vars.product_short %> components use the same proxy service.
+You can also proxy outgoing HTTP/HTTPS traffic from Ops Manager and the BOSH Director so that all Tanzu Kubernetes Grid Integrated Edition components use the same proxy service.
The following diagram illustrates the network architecture:
-|||
Velero | -<%= vars.velero_version %>* | +<%= vars.velero_version %>* | Release Notes |
Note: The component versions supported by <%= vars.k8s_runtime_abbr %> Management Console might differ from or be more limited than - the versions supported by <%= vars.k8s_runtime_abbr %>. +
Note: The component versions supported by TKGI Management Console might differ from or be more limited than + the versions supported by TKGI.
March 5, 2024 | |||
Installed <%= vars.k8s_runtime_abbr %> version | +Installed TKGI version | v1.18.2 | |
Velero | -<%= vars.velero_version %>* | +<%= vars.velero_version %>* | Release Notes |
Important: To address CVE-2024-21626 by patching <%= vars.k8s_runtime_abbr %> with a runc upgrade, see [High-Severity CVE-2024-21626 in runc 1.1.11 and earlier](#1-18-0-cve-2024-21626) below. +
Important: To address CVE-2024-21626 by patching TKGI with a runc upgrade, see [High-Severity CVE-2024-21626 in runc 1.1.11 and earlier](#1-18-0-cve-2024-21626) below.
### Deprecations
-No <%= vars.k8s_runtime_abbr %> features have been deprecated
-or removed from <%= vars.k8s_runtime_abbr %> v1.18.
+No TKGI features have been deprecated
+or removed from TKGI v1.18.
Note: The component versions supported by <%= vars.k8s_runtime_abbr %> Management Console might differ from or be more limited than - the versions supported by <%= vars.k8s_runtime_abbr %>. +
Note: The component versions supported by TKGI Management Console might differ from or be more limited than + the versions supported by TKGI.
December 19, 2023 | |||
Installed <%= vars.k8s_runtime_abbr %> version | +Installed TKGI version | v1.18.1 | |
Velero | -<%= vars.velero_version %>* | +<%= vars.velero_version %>* | Release Notes |
Note: Cluster monitoring continues to use Telegraf v1.13.2. -
-<% end %> * **The Out-of-Tree Kubernetes AWS Cloud Provider Requires Additional Permissions**: -In AWS environments, <%= vars.k8s_runtime_abbr %> v1.18 integrates the out-of-tree AWS cloud provider for Kubernetes. -The Kubernetes AWS out-of-tree cloud provider requires a different AWS configuration than was required by the in-tree Kubernetes AWS cloud provider used by previous <%= vars.k8s_runtime_abbr %> versions. -Basic cloud provider functions will fail in <%= vars.k8s_runtime_abbr %> v1.18 if the AWS out-of-tree cloud provider requirements are not met. +In AWS environments, TKGI v1.18 integrates the out-of-tree AWS cloud provider for Kubernetes. +The Kubernetes AWS out-of-tree cloud provider requires a different AWS configuration than was required by the in-tree Kubernetes AWS cloud provider used by previous TKGI versions. +Basic cloud provider functions will fail in TKGI v1.18 if the AWS out-of-tree cloud provider requirements are not met. For more information, see [AWS Permissions Errors When Using the Out-of-Tree Kubernetes AWS Cloud Provider](troubleshoot-issues.html#aws-cpi-permissions) in _General Troubleshooting_. * **Windows Stemcells Must Be Updated to Expose Ethernet Adapter Information**: -By default, Windows worker node VM Ethernet adapter information is not exposed on <%= vars.k8s_runtime_abbr %> Windows clusters. When creating BOSH Windows stemcells for <%= vars.k8s_runtime_abbr %> v1.18, you must configure your base Windows OS image to expose Ethernet adapter information. +By default, Windows worker node VM Ethernet adapter information is not exposed on TKGI Windows clusters. When creating BOSH Windows stemcells for TKGI v1.18, you must configure your base Windows OS image to expose Ethernet adapter information. For more information, see [Expose Ethernet Adapter Information on Worker Node VMs](https://docs-staging.vmware.com/en/VMware-Tanzu-Kubernetes-Grid-Integrated-Edition/1.18/tkgi/GUID-create-vsphere-stemcell.html#expose-guest-net) in the revised _Creating a Windows Stemcell for vSphere Using Stembuild_ procedure.Note: You must grant the AWS Worker Instance Profile
additional AWS Identity and Access Management (IAM) permissions before using
the Antrea Egress feature with worker nodes on AWS. For more information, see
@@ -862,13 +840,13 @@ in _Deploying and Managing Cloud Native Storage (CNS) on vSphere_.
### Deprecations
-The following <%= vars.k8s_runtime_abbr %> features have been deprecated or removed from <%= vars.k8s_runtime_abbr %> <%= vars.product_version %>:
+The following TKGI features have been deprecated or removed from TKGI <%= vars.product_version %>:
* **Google Cloud Platform**: Support for the Google Cloud Platform (GCP) is deprecated.
-Support for GCP will be entirely removed in <%= vars.k8s_runtime_abbr %> v1.19.
+Support for GCP will be entirely removed in TKGI v1.19.
* **Flannel Support**: Support for the Flannel Container Networking Interface (CNI) is deprecated.
-Support for Flannel will be entirely removed in <%= vars.k8s_runtime_abbr %> v1.19.
+Support for Flannel will be entirely removed in TKGI v1.19.
<%= vars.recommended_by %> recommends that you switch your Flannel CNI-configured clusters to the Antrea CNI.
For more information about Flannel CNI deprecation, see
[About Switching from the Flannel CNI to the Antrea CNI](understanding-upgrades.html#cni)
@@ -877,13 +855,13 @@ in _About Tanzu Kubernetes Grid Integrated Edition Upgrades_.
###Known Issues
-<%= vars.k8s_runtime_abbr %> v1.18.0 has the following known issues:
+TKGI v1.18.0 has the following known issues:
Note: The component versions supported by <%= vars.k8s_runtime_abbr %> Management Console might differ from or be more limited than - the versions supported by <%= vars.k8s_runtime_abbr %>. +
Note: The component versions supported by TKGI Management Console might differ from or be more limited than + the versions supported by TKGI.
November 02, 2023 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Installed <%= vars.k8s_runtime_abbr %> version | +Installed TKGI version | v1.18.0 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Warning:
-Never use the CredHub Maestro maestro regenerate ca/leaf --all
command to rotate <%= vars.k8s_runtime_abbr %> certificates.
+Never use the CredHub Maestro maestro regenerate ca/leaf --all
command to rotate TKGI certificates.
WARNING: Rotate cluster certificates only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Rotate cluster certificates only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
### Rotate All Cluster Certificates Except NSX @@ -137,7 +137,7 @@ tkgi rotate-certificates CLUSTER-NAME --skip-nsx --all This command rotates [all certificates](#cluster-certs) except `tls-nsx-t` and `tls-nsx-lb`. -WARNING: Rotate cluster certificates only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Rotate cluster certificates only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
@@ -161,7 +161,7 @@ You are about to rotate nsx related certificates for cluster tkgi-cluster-01. Th For more information, see [Rotate NSX Certificates for Kubernetes Clusters](./nsxt-certs-rotate.html). -WARNING: Rotate cluster certificates only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Rotate cluster certificates only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
@@ -188,7 +188,7 @@ in _Release Notes_ for additional requirements. For complete usage, see [Use a Custom CA for Kubernetes Clusters](./custom-ca.html). -WARNING: Rotate cluster certificates only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Rotate cluster certificates only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
@@ -213,5 +213,5 @@ Flags: --wait Wait for the operation to finish ``` -WARNING: Rotate cluster certificates only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Rotate cluster certificates only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
diff --git a/rotate-tile-certificates.html.md.erb b/rotate-tile-certificates.html.md.erb index cb2fc720a..9be088b04 100644 --- a/rotate-tile-certificates.html.md.erb +++ b/rotate-tile-certificates.html.md.erb @@ -4,29 +4,29 @@ owner: TKGI --- This topic describes how to rotate certificates used only by -the <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) control plane and tile. +the VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) control plane and tile. -This topic covers rotating <%= vars.k8s_runtime_abbr %> control plane certificates only. -For more information about <%= vars.k8s_runtime_abbr %> Certificates: +This topic covers rotating TKGI control plane certificates only. +For more information about TKGI Certificates: -* For conceptual information about certificates in <%= vars.k8s_runtime_abbr %>, -see [<%= vars.k8s_runtime_abbr %> Certificates](certificate-concepts.html). -* To rotate the certificates used by <%= vars.k8s_runtime_abbr %>-deployed Kubernetes clusters, +* For conceptual information about certificates in TKGI, +see [TKGI Certificates](certificate-concepts.html). +* To rotate the certificates used by TKGI-deployed Kubernetes clusters, see [Rotating Cluster Certificates](./rotate-cluster-certificates.html). -Warning: If you use the <%= vars.k8s_runtime_abbr %> Management Console - to manage <%= vars.k8s_runtime_abbr %> on vSphere with NSX, +
Warning: If you use the TKGI Management Console + to manage TKGI on vSphere with NSX, you must use the Management Console to rotate the NSX Manager CA Certificate. - To manage your NSX Manager CA Certificate using the <%= vars.k8s_runtime_abbr %> Management Console, see + To manage your NSX Manager CA Certificate using the TKGI Management Console, see Which Options Can I Reconfigure? - in Reconfigure Your <%= vars.product_short %> Deployment. + in Reconfigure Your Tanzu Kubernetes Grid Integrated Edition Deployment.
## Overview -<%= vars.k8s_runtime_abbr %> control plane certificates, and their leaf certificates, are automatically generated by <%= vars.k8s_runtime_abbr %> +TKGI control plane certificates, and their leaf certificates, are automatically generated by TKGI during installation: * `pxc_server_ca` @@ -36,18 +36,18 @@ during installation: Control plane certificates have a default expiration period of four years. -To rotate <%= vars.k8s_runtime_abbr %> control plane certificates, +To rotate TKGI control plane certificates, first determine which certificates are due to expire and then rotate them: * [Check Certificate Expiration Dates](#expiration) * [Rotate TKGI Control Plane Certificates](#control) -The procedures below can be used to rotate <%= vars.k8s_runtime_abbr %> control plane certificates, -certificates for <%= vars.k8s_runtime_abbr %> communication with underlying Ops Manager and BOSH infrastructure, and +The procedures below can be used to rotate TKGI control plane certificates, +certificates for TKGI communication with underlying Ops Manager and BOSH infrastructure, and certificates for components such as database, CredHub, UAA, and Telemetry.Warning:
-Never use the CredHub Maestro maestro regenerate ca/leaf --all
command to rotate <%= vars.k8s_runtime_abbr %> certificates.
+Never use the CredHub Maestro maestro regenerate ca/leaf --all
command to rotate TKGI certificates.
WARNING: Do not change the number of control plane/etcd nodes for any plan that was used to create currently-running clusters. -<%= vars.product_short %> does not support changing the number of control plane/etcd nodes for plans +Tanzu Kubernetes Grid Integrated Edition does not support changing the number of control plane/etcd nodes for plans with existing clusters.
-## Scale Horizontally by Changing the Number of Worker Nodes Using the <%= vars.k8s_runtime_abbr %> CLI +## Scale Horizontally by Changing the Number of Worker Nodes Using the TKGI CLI -You can use the <%= vars.k8s_runtime_abbr %> CLI to scale an existing cluster by increasing or decreasing the number of worker nodes in the cluster. +You can use the TKGI CLI to scale an existing cluster by increasing or decreasing the number of worker nodes in the cluster. To increase or decrease the number of worker nodes on a cluster: @@ -47,7 +47,7 @@ To increase or decrease the number of worker nodes on a cluster: worker nodes. * To scale up your existing cluster, enter a number higher than the current number of worker nodes. The maximum number of worker nodes you can set is configured in the **Plan** pane of - the <%= vars.product_tile %> tile in Ops Manager. + the Tanzu Kubernetes Grid Integrated Edition tile in Ops Manager.Note: VMware recommends that you avoid using the
tkgi resize
command to perform resizing operations.
Note: This command might roll additional virtual machines in the cluster, which can affect workloads if the worker nodes are at capacity.
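A minimal sketch of the horizontal scaling described above, assuming a cluster named `my-cluster` and the `--num-nodes` flag of `tkgi update-cluster`:

```
# Scale the cluster to five worker nodes; the value must not exceed the
# maximum worker count configured in the cluster's plan.
tkgi update-cluster my-cluster --num-nodes 5

# Confirm the new worker count after the update completes.
tkgi cluster my-cluster
```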
-## Scale Vertically by Changing Cluster Node VM Sizes in the <%= vars.k8s_runtime_abbr %> Tile +## Scale Vertically by Changing Cluster Node VM Sizes in the TKGI Tile You can scale an existing cluster vertically by changing the size of the control plane or worker node VMs. When you do this, BOSH recreates the VMs sequentially, one cluster at a time, and one node after another within the cluster. For more information, see -[VM Sizing for <%= vars.k8s_runtime_abbr %> Clusters](vm-sizing.html). +[VM Sizing for TKGI Clusters](vm-sizing.html). To change the size of a Kubernetes cluster node VM, complete the following steps: 1. Log in to Ops Manager. -1. Select the <%= vars.k8s_runtime_abbr %> tile. +1. Select the TKGI tile. 1. Select the plan that is in use by the cluster(s) you want to resize. 1. To change the VM size: - For Control Plane nodes, select the desired VM size from the **Master/ETCD VM Type** menu. - For Worker nodes, select the desired VM size from the **Worker VM Type** menu. -Note: See Customize Control Plane and Worker Node VM Size and Type for information on creating a custom VM size for use with a <%= vars.k8s_runtime_abbr %> cluster.
+Note: See Customize Control Plane and Worker Node VM Size and Type for information on creating a custom VM size for use with a TKGI cluster.
1. Click **Save** to preserve tile changes. 1. At the **Installation Dashboard**, click **Review Pending Changes**. -Note: Support for SecurityContextDeny admission controller has been removed in <%= vars.k8s_runtime_abbr %> v1.18. The SecurityContextDeny admission controller has been deprecated, and the Kubernetes community recommends the controller not be used. -Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in <%= vars.k8s_runtime_abbr %>. +
Note: Support for SecurityContextDeny admission controller has been removed in TKGI v1.18. The SecurityContextDeny admission controller has been deprecated, and the Kubernetes community recommends the controller not be used. +Pod security admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in TKGI.
@@ -31,20 +31,20 @@ This section describes the impact of enabling the SecurityContextDeny admission **New Cluster.** If you enable the SecurityContextDeny admission plugin in a plan and deploy a new Kubernetes cluster based on that plan, cluster users will not be able to create securityContext capabilities on that cluster. **Existing Cluster.** If you enable the SecurityContextDeny admission plugin in a plan and update a Kubernetes cluster, cluster users will no longer be able to create securityContext capabilities on that cluster. -This assumes you enable **Upgrade all clusters errand** or update your cluster individually through the <%= vars.k8s_runtime_abbr %> Command Line Interface (<%= vars.k8s_runtime_abbr %> CLI). +This assumes you enable **Upgrade all clusters errand** or update your cluster individually through the TKGI Command Line Interface (TKGI CLI). ## Enabling the SecurityContextDeny Admission Plugin To enable the SecurityContextDeny admission plugin: -1. In the <%= vars.k8s_runtime_abbr %> tile, select the desired Plan, such as Plan 1. +1. In the TKGI tile, select the desired Plan, such as Plan 1. 1. At the bottom of the configuration panel, select the **SecurityContextDeny** option.Note: This task is optional. Perform it after considering the types of apps you have deployed. For example, stateful, stateless, or legacy apps.
###Step 4: Shut Down Kubernetes Clusters -Shut down all <%= vars.product_short %>-provisioned Kubernetes clusters following the procedure defined in the How to shutdown and startup a Multi Control Plane Node <%= vars.k8s_runtime_abbr %> cluster knowledge base article. +Shut down all Tanzu Kubernetes Grid Integrated Edition-provisioned Kubernetes clusters following the procedure defined in the How to shutdown and startup a Multi Control Plane Node TKGI cluster knowledge base article. For each Kubernetes cluster that you intend to shut down, do the following: -1. Using the BOSH CLI, retrieve the BOSH deployment name of your <%= vars.product_short %> clusters by running the following command: +1. Using the BOSH CLI, retrieve the BOSH deployment name of your Tanzu Kubernetes Grid Integrated Edition clusters by running the following command: ``` bosh deployments @@ -83,7 +83,7 @@ For each Kubernetes cluster that you intend to shut down, do the following: bosh -d service-instance_CLUSTER-UUID stop windows-worker ``` - Where `CLUSTER-UUID` is the BOSH deployment name of your <%= vars.product_short %> cluster. + Where `CLUSTER-UUID` is the BOSH deployment name of your Tanzu Kubernetes Grid Integrated Edition cluster. For example: @@ -101,7 +101,7 @@ For each Kubernetes cluster that you intend to shut down, do the following: bosh -d service-instance_CLUSTER-UUID stop master ``` - Where `CLUSTER-UUID` is the BOSH deployment name of your <%= vars.product_short %> cluster. + Where `CLUSTER-UUID` is the BOSH deployment name of your Tanzu Kubernetes Grid Integrated Edition cluster. For example: ```console @@ -116,23 +116,23 @@ For each Kubernetes cluster that you intend to shut down, do the following: [View a larger version of this image.](images/nsxt/shutdown/shutdown-k8s-nodes.png) -###Step 5: Stop the <%= vars.k8s_runtime_abbr %> Control Plane +###Step 5: Stop the TKGI Control Plane -To shut down the <%= vars.k8s_runtime_abbr %> control plane, stop and shut down the <%= vars.control_plane %> and <%= vars.control_plane_db %> VMs as follows: +To shut down the TKGI control plane, stop and shut down the TKGI API and TKGI Database VMs as follows: -1. [Stop <%= vars.k8s_runtime_abbr %> Control Plane Processes](#stop-tkgi-control) -1. [Shut Down the <%= vars.k8s_runtime_abbr %> API and Database VMs](#shutdown-tkgi-vms) +1. [Stop TKGI Control Plane Processes](#stop-tkgi-control) +1. [Shut Down the TKGI API and Database VMs](#shutdown-tkgi-vms) -####Stop <%= vars.k8s_runtime_abbr %> Control Plane Processes +####Stop TKGI Control Plane Processes -To stop <%= vars.product_short %> control plane processes and services, do the following: +To stop Tanzu Kubernetes Grid Integrated Edition control plane processes and services, do the following: -1. Using the BOSH CLI, retrieve the BOSH deployment ID of your <%= vars.product_short %> deployment by running the following command: +1. Using the BOSH CLI, retrieve the BOSH deployment ID of your Tanzu Kubernetes Grid Integrated Edition deployment by running the following command: ``` bosh deployments ``` - The <%= vars.product_short %> deployment ID is `pivotal-container-service-` followed by a unique BOSH-generated hash. + The Tanzu Kubernetes Grid Integrated Edition deployment ID is `pivotal-container-service-` followed by a unique BOSH-generated hash. 1. 
Stop the TKGI control plane VM by running the following command: @@ -148,16 +148,16 @@ To stop <%= vars.product_short %> control plane processes and services, do the f $ bosh -d pivotal-container-service-1bf7b02738056cdc37e6 stop ``` -####Shut Down the <%= vars.k8s_runtime_abbr %> API and Database VMs +####Shut Down the TKGI API and Database VMs -To shut down the <%= vars.control_plane %> and <%= vars.control_plane_db %> VMs, do the following: +To shut down the TKGI API and TKGI Database VMs, do the following: -1. Run the `bosh vms` command to list your <%= vars.product_short %> control plane VMs. +1. Run the `bosh vms` command to list your Tanzu Kubernetes Grid Integrated Edition control plane VMs. ``` bosh -d pivotal-container-service-DEPLOYMENT-ID vms ``` - Where `DEPLOYMENT-ID` is the BOSH-generated ID of your <%= vars.product_short %> deployment. + Where `DEPLOYMENT-ID` is the BOSH-generated ID of your Tanzu Kubernetes Grid Integrated Edition deployment. For example: @@ -166,25 +166,16 @@ To shut down the <%= vars.control_plane %> and <%= vars.control_plane_db %> VMs, ``` 1. Review the `bosh vms` output: - * Record the <%= vars.control_plane %> VM name, + * Record the TKGI API VM name, listed under **Instances** as `pivotal-container-service/` followed by a unique BOSH-generated hash. - * Record the <%= vars.control_plane_db %> VM name(s), -listed under **Instances** as `pks-db/` followed by a unique BOSH-generated hash. -<% if vars.product_version == "COMMENTED" %> -1. If any <%= vars.control_plane_db %> VMs are not stopped, run `bosh stop` for each to shut them down: - - ``` - bosh -d TKGI-DATABASE-VM-ID stop - ``` + * Record the TKGI Database VM name(s), +listed under **Instances** as `pks-db/` followed by a unique BOSH-generated hash. - Where `TKGI-DATABASE-VM-ID` is the name of the <%= vars.control_plane_db %> VM. -<% end %> +1. Using your IaaS dashboard, locate and gracefully shut down the TKGI control plane VMs: + 1. The TKGI API VMs. + 1. The TKGI Database VMs. -1. Using your IaaS dashboard, locate and gracefully shut down the <%= vars.k8s_runtime_abbr %> control plane VMs: - 1. The <%= vars.control_plane %> VMs. - 1. The <%= vars.control_plane_db %> VMs. - -Note: For more information about the bootstrap
errand, see Run the Bootstrap Errand in the VMware Tanzu SQL with MySQL for VMs documentation.
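Before shutting down, a quick way to list the relevant BOSH deployment names from the steps above (a sketch, assuming the `--column` table option of the BOSH CLI):

```
# TKGI-provisioned clusters appear as service-instance_<UUID>;
# the control plane appears as pivotal-container-service-<hash>.
bosh deployments --column=name
```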
Note: <%= vars.product_short %> does not currently support the Kubernetes Service Catalog and the GCP Service Broker.
+Note: Tanzu Kubernetes Grid Integrated Edition does not currently support the Kubernetes Service Catalog and the GCP Service Broker.
diff --git a/support-windows-index.html.md.erb b/support-windows-index.html.md.erb index 282022cd4..fc8dbdf04 100644 --- a/support-windows-index.html.md.erb +++ b/support-windows-index.html.md.erb @@ -3,7 +3,7 @@ title: Supporting Windows Clusters owner: TKGI --- -The following topics describe how to support Windows worker-based Kubernetes clusters provisioned by <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>): +The following topics describe how to support Windows worker-based Kubernetes clusters provisioned by VMware Tanzu Kubernetes Grid Integrated Edition (TKGI):Note: <%= vars.product_short %> Cluster tagging requires Ops Manager v2.8.0 or later. +
Note: Tanzu Kubernetes Grid Integrated Edition Cluster tagging requires Ops Manager v2.8.0 or later.
## Tag Your Clusters as They Are Created @@ -43,7 +43,7 @@ $ tkgi create-cluster my-cluster --tags "client:example.com, costcenter:pettycas ## Tag Your Existing Clusters -You can use the <%= vars.k8s_runtime_abbr %> CLI to tag an existing cluster. +You can use the TKGI CLI to tag an existing cluster. To apply tags to your existing cluster's VMs: @@ -71,7 +71,7 @@ and specify the `--tags` parameter and a comma-delimited list of `key:value` pai $ tkgi update-cluster my-cluster --tags "status:non-billable, region:northwest" ``` -WARNING: Update a cluster with a revised tags
only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades.
+
WARNING: Update a cluster with revised tags
only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
WARNING: Update a cluster with a revised tags
only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades.
+
WARNING: Update a cluster with revised tags
only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
WARNING: Update a cluster with a revised tags
only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades.
+
WARNING: Update a cluster with revised tags
only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
Note:
@@ -170,7 +170,7 @@ The tagging you apply must adhere to the following rules:
* The value can contain a maximum of 80 alphanumeric characters.
* Tag keys and values must not include any of the following symbols: `"`, `:`, `,`.
* Surrounding double quotes are required if there are one or more spaces in your tag list, such as a space after a comma delimiter.
-* Tag keys and values must adhere to the tagging rules of the IaaS hosting your <%= vars.product_short %> environment.
+* Tag keys and values must adhere to the tagging rules of the IaaS hosting your Tanzu Kubernetes Grid Integrated Edition environment.
For information about IaaS-specific tagging rules see the following:
@@ -179,29 +179,6 @@ For information about IaaS-specific tagging rules see the following:
[Use tags to organize your Azure resources](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/tag-resources) in the Azure documentation.
* vSphere: See
[vSphere Tags and Attributes](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vcenterhost.doc/GUID-E8E854DD-AA97-4E0C-8419-CE84F93C4058.html) in the vSphere documentation.
-<% if vars.product_version == "COMMENTED" %>
-<%#=
-
For information about tagging rules see the following:
-
IaaS | -Tagging Rule Documentation | -
---|---|
Azure | -See Use tags to organize your Azure resources in the Azure documentation. | -
AWS | -See User-Defined Tag Restrictions in the AWS documentation. | -
GCP | -See Labelling and grouping your Google Cloud Platform resources in the GCP documentation. | -
Note: <%= vars.product_short %> does not collect any personally identifiable information (PII) at either participation level. -For a list of the data <%= vars.product_short %> collects, see Data Dictionary.
+Note: Tanzu Kubernetes Grid Integrated Edition does not collect any personally identifiable information (PII) at either participation level. +For a list of the data Tanzu Kubernetes Grid Integrated Edition collects, see Data Dictionary.
### Configure CEIP To configure CEIP, see the _VMware CEIP_ section of the installation topic for your IaaS: - * [Installing <%= vars.product_short %> on vSphere](./installing-vsphere.html#telemetry) - * [Installing <%= vars.product_short %> on vSphere with NSX](./installing-nsx-t.html#telemetry) - * [Installing <%= vars.product_short %> on AWS](./installing-aws.html#telemetry) - * [Installing <%= vars.product_short %> on Azure](./installing-azure.html#telemetry) - * [Installing <%= vars.product_short %> on GCP](./installing-gcp.html#telemetry) + * [Installing Tanzu Kubernetes Grid Integrated Edition on vSphere](./installing-vsphere.html#telemetry) + * [Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX](./installing-nsx-t.html#telemetry) + * [Installing Tanzu Kubernetes Grid Integrated Edition on AWS](./installing-aws.html#telemetry) + * [Installing Tanzu Kubernetes Grid Integrated Edition on Azure](./installing-azure.html#telemetry) + * [Installing Tanzu Kubernetes Grid Integrated Edition on GCP](./installing-gcp.html#telemetry) #### Proxy Communication -If you use a proxy server, the <%= vars.product_short %> proxy settings apply to outgoing CEIP data. +If you use a proxy server, the Tanzu Kubernetes Grid Integrated Edition proxy settings apply to outgoing CEIP data. -To configure <%= vars.product_short %> proxy settings for CEIP and other communications, see the following: +To configure Tanzu Kubernetes Grid Integrated Edition proxy settings for CEIP and other communications, see the following: * For AWS, see [Using Proxies with Tanzu Kubernetes Grid Integrated Edition on AWS](proxies-aws.html). * For vSphere, see [Networking](installing-vsphere.html#networking) in _Installing Tanzu Kubernetes Grid Integrated Edition on vSphere_. @@ -44,7 +44,7 @@ To configure <%= vars.product_short %> proxy settings for CEIP and other communi The CEIP program use the following components to collect data: -+ **Telemetry Server:** This component runs on the <%= vars.k8s_runtime_abbr %> control plane. The server receives CEIP events from the <%= vars.control_plane %> and metrics from Telemetry agent pods. The server sends events and metrics to a data lake for archiving and analysis. ++ **Telemetry Server:** This component runs on the TKGI control plane. The server receives CEIP events from the TKGI API and metrics from Telemetry agent pods. The server sends events and metrics to a data lake for archiving and analysis. + **Telemetry Agent Pod:** This component runs in each Kubernetes cluster as a deployment with one replica. Agent pods periodically poll the Kubernetes API for cluster metrics and send the metrics to the Telemetry server. @@ -58,4 +58,4 @@ The following diagram shows how CEIP data flows through the system components: ## Data Dictionary -For information about <%= vars.k8s_runtime_abbr %> CEIP collection and reporting, see the [<%= vars.k8s_runtime_abbr %> Telemetry Data](https://docs.google.com/spreadsheets/d/18UCd1kbhR3xV_XOl6KcEU64GI6ySdkRa3iG_8QAROl8/edit#gid=945250226) spreadsheet, hosted on Google Drive. +For information about TKGI CEIP collection and reporting, see the [TKGI Telemetry Data](https://docs.google.com/spreadsheets/d/18UCd1kbhR3xV_XOl6KcEU64GI6ySdkRa3iG_8QAROl8/edit#gid=945250226) spreadsheet, hosted on Google Drive. 
diff --git a/troubleshoot-issues.html.md.erb b/troubleshoot-issues.html.md.erb index 31a264940..fdd7b20f6 100644 --- a/troubleshoot-issues.html.md.erb +++ b/troubleshoot-issues.html.md.erb @@ -3,14 +3,14 @@ title: General Troubleshooting owner: TKGI --- -This topic assists with diagnosing and troubleshooting issues when installing or using <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic assists with diagnosing and troubleshooting issues when installing or using VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). ##Overview Refer to the following for troubleshooting assistance: * [The Fluent Bit Pod Restarts Due to Out-of-Memory Issue](#fluent-bit-memory) -* [<%= vars.control_plane %> is Slow or Times Out](#api-timeout) +* [TKGI API is Slow or Times Out](#api-timeout) * [All Cluster Operations Fail](#cluster-operation-fails) * [Cluster Creation Fails](#cluster-create-fail) * [Cluster Deletion Fails](#cluster-delete-fail) @@ -48,29 +48,29 @@ The Fluent Bit Pod has insufficient memory for your environment's utilization. **Solution** Increase the Fluent Bit Pod memory limit. -For more information, see [Log Sink Resources](installing-vsphere.html#log-sinks) in the _Installing <%= vars.product_short %>_ topic for your IaaS. +For more information, see [Log Sink Resources](installing-vsphere.html#log-sinks) in the _Installing Tanzu Kubernetes Grid Integrated Edition_ topic for your IaaS.Note: In <%= vars.k8s_runtime_abbr %> v1.17.0 and earlier, the <%= vars.k8s_runtime_abbr %> MC Operation Timeout nsx_feign_client_read_timeout
is fixed at 60 seconds and cannot be customized.
+
Note: In TKGI v1.17.0 and earlier, the TKGI MC Operation Timeout nsx_feign_client_read_timeout
is fixed at 60 seconds and cannot be customized.
Note: If you use the <%= vars.k8s_runtime_abbr %> MC, you must configure - the <%= vars.k8s_runtime_abbr %> Operation Timeout in the <%= vars.k8s_runtime_abbr %> MC configuration YAML. +
Note: If you use the TKGI MC, you must configure + the TKGI Operation Timeout in the TKGI MC configuration YAML.
Note: If necessary, you can append the --force
flag to delete the deployment.
Note: Use only lowercase characters in your <%= vars.k8s_runtime_abbr %>-provisioned + Where `CLUSTER-NAME` is the name of your Tanzu Kubernetes Grid Integrated Edition cluster. +
Note: Use only lowercase characters in your TKGI-provisioned Kubernetes cluster names if you manage your clusters with Tanzu Mission Control (TMC). Clusters with names that include an uppercase character cannot be attached to TMC.
-1. To re-create the cluster, run the following <%= vars.k8s_runtime_abbr %> command: +1. To re-create the cluster, run the following TKGI command: ``` tkgi create-cluster CLUSTER-NAME ``` - Where `CLUSTER-NAME` is the name of your <%= vars.product_short %> cluster. + Where `CLUSTER-NAME` is the name of your Tanzu Kubernetes Grid Integrated Edition cluster.Note: Use only lowercase characters when naming your cluster if you manage your clusters with Tanzu Mission Control (TMC). Clusters with names that include an uppercase character cannot be attached to TMC.
@@ -343,10 +343,10 @@ For example, pods cannot resolve DNS names, and error messages report the servic **Explanation** -Kubernetes features and functions are provided by <%= vars.product_short %> add-ons. +Kubernetes features and functions are provided by Tanzu Kubernetes Grid Integrated Edition add-ons. DNS resolution, for example, is provided by the `CoreDNS` service. -To activate these add-ons, Ops Manager must run scripts after deploying <%= vars.product_short %>. You must configure Ops Manager to automatically run these post-deploy scripts. +To activate these add-ons, Ops Manager must run scripts after deploying Tanzu Kubernetes Grid Integrated Edition. You must configure Ops Manager to automatically run these post-deploy scripts. **Solution** @@ -412,11 +412,11 @@ The above command, when applied to each VM, gives your VMs the correct permissio **Symptoms** After making your selection in the **Upgrade all clusters errand** section, the worker node might hang indefinitely. -For more information about monitoring the **Upgrade all clusters errand** using the BOSH CLI, see [Upgrade the <%= vars.k8s_runtime_abbr %> Tile](upgrade.html#upgrade-tile) in _Upgrading <%= vars.product_short %> (Flannel Networking)_. +For more information about monitoring the **Upgrade all clusters errand** using the BOSH CLI, see [Upgrade the TKGI Tile](upgrade.html#upgrade-tile) in _Upgrading Tanzu Kubernetes Grid Integrated Edition (Flannel Networking)_. **Explanation** -During the <%= vars.product_tile %> tile upgrade process, worker nodes are cordoned and drained. This drain is dependent on Kubernetes being able to unschedule all pods. If Kubernetes is unable to unschedule a pod, then the drain hangs indefinitely. +During the Tanzu Kubernetes Grid Integrated Edition tile upgrade process, worker nodes are cordoned and drained. This drain is dependent on Kubernetes being able to unschedule all pods. If Kubernetes is unable to unschedule a pod, then the drain hangs indefinitely. Kubernetes might be unable to unschedule the node if the `PodDisruptionBudget` object has been configured to permit zero disruptions and only a single instance of the pod has been scheduled. In your spec file, the `.spec.replicas` configuration sets the total amount of replicas that are available in your app. @@ -435,10 +435,10 @@ To resolve this issue, do one of the following: When the number of replicas configured in `.spec.replicas` is greater than the number of replicas set in the `PodDisruptionBudget` object, disruptions can occur.No. Cannot create, update, and delete quotas. | ||||
List <%= vars.product_short %> plans | +List Tanzu Kubernetes Grid Integrated Edition plans | Yes. Can list all available plans. | Yes. Can list all available plans. | Yes. Can list all available plans. |
Warning: If you deactivate the default full upgrade -and upgrade only the <%= vars.k8s_runtime_abbr %> control plane, -you must upgrade all your <%= vars.k8s_runtime_abbr %>-provisioned Kubernetes clusters before the next <%= vars.product_tile %> tile +and upgrade only the TKGI control plane, +you must upgrade all your TKGI-provisioned Kubernetes clusters before the next Tanzu Kubernetes Grid Integrated Edition tile upgrade. Deactivating the default full upgrade -and upgrading only the <%= vars.k8s_runtime_abbr %> control plane cause the <%= vars.k8s_runtime_abbr %> version -tagged in your Kubernetes clusters to fall behind the <%= vars.product_tile %> tile version. -If your <%= vars.k8s_runtime_abbr %>-provisioned Kubernetes clusters fall more than one version behind the tile, -<%= vars.k8s_runtime_abbr %> cannot upgrade the clusters. +and upgrading only the TKGI control plane cause the TKGI version +tagged in your Kubernetes clusters to fall behind the Tanzu Kubernetes Grid Integrated Edition tile version. +If your TKGI-provisioned Kubernetes clusters fall more than one version behind the tile, +TKGI cannot upgrade the clusters.
<%# Note: The formatting on this page breaks when notes are configured the normal way.%>Supported Upgrade Types | ||||
---|---|---|---|---|
Full <%= vars.k8s_runtime_abbr %> upgrade | -<%= vars.k8s_runtime_abbr %> control plane only | +Full TKGI upgrade | +TKGI control plane only | Kubernetes clusters only |
<%= vars.k8s_runtime_abbr %> Tile | +TKGI Tile | ✔ | ✔ | ✔ |
<%= vars.k8s_runtime_abbr %> CLI | +TKGI CLI | ✖ | ✖ | ✔ |
Warning: If you have <%= vars.k8s_runtime_abbr %>-provisioned Windows worker clusters, - do not activate the Upgrade all clusters errand before upgrading to the <%= vars.k8s_runtime_abbr %> v1.17 tile. +
Warning: If you have TKGI-provisioned Windows worker clusters, + do not activate the Upgrade all clusters errand before upgrading to the TKGI v1.17 tile. You cannot use the Upgrade all clusters errand because you must manually migrate each individual Windows worker cluster to the CSI Driver for vSphere. For more information, see Configure vSphere CSI for Windows in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
<% end %> -### <%= vars.k8s_runtime_abbr %> Control Plane Only Upgrades +### TKGI Control Plane Only Upgrades -During a **<%= vars.k8s_runtime_abbr %> control plane only** upgrade, -the <%= vars.product_tile %> tile does the following: +During a **TKGI control plane only** upgrade, +the Tanzu Kubernetes Grid Integrated Edition tile does the following: 1. **Recreates the Control Plane VMs**: - * Upgrades the <%= vars.k8s_runtime_abbr %> version on the <%= vars.k8s_runtime_abbr %> control plane. + * Upgrades the TKGI version on the TKGI control plane. * For more information, see [What Happens During Control Plane Upgrades](#control-plane-upgrades-details) below. 1. **Does Not Upgrade Clusters**: - * Does not automatically upgrade <%= vars.k8s_runtime_abbr %>-provisioned Kubernetes clusters after upgrading the <%= vars.k8s_runtime_abbr %> control plane. - * Requires the **Upgrade all clusters errand** check box is deactivated in the **Errands** pane on the <%= vars.product_tile %> tile. - * The <%= vars.k8s_runtime_abbr %>-provisioned Kubernetes clusters remain on the previous <%= vars.k8s_runtime_abbr %> version until you manually upgrade them. + * Does not automatically upgrade TKGI-provisioned Kubernetes clusters after upgrading the TKGI control plane. + * Requires the **Upgrade all clusters errand** check box is deactivated in the **Errands** pane on the Tanzu Kubernetes Grid Integrated Edition tile. + * The TKGI-provisioned Kubernetes clusters remain on the previous TKGI version until you manually upgrade them. For more information, see [What Happens During Cluster Upgrades](#cluster-upgrades) below, and [Upgrading Clusters](upgrade-clusters.html). - * Some cluster management tasks are not supported for clusters that are running the previous <%= vars.k8s_runtime_abbr %> version. - For more information, see [Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Only Upgrade](#control-plane-upgrades-supported-tasks) below. + * Some cluster management tasks are not supported for clusters that are running the previous TKGI version. + For more information, see [Tasks Supported Following a TKGI Control Plane Only Upgrade](#control-plane-upgrades-supported-tasks) below. @@ -169,60 +169,60 @@ the <%= vars.product_tile %> tile does the following:Note: <%= vars.recommended_by %> recommends you do not run <%= vars.k8s_runtime_abbr %> CLI cluster management commands on clusters running the previous <%= vars.k8s_runtime_abbr %> version. +
Note: <%= vars.recommended_by %> recommends you do not run TKGI CLI cluster management commands on clusters running the previous TKGI version.
Note: <%= vars.recommended_by %> recommends you do not run <%= vars.k8s_runtime_abbr %> CLI cluster management commands on clusters running the previous <%= vars.k8s_runtime_abbr %> version. +
Note: <%= vars.recommended_by %> recommends you do not run TKGI CLI cluster management commands on clusters running the previous TKGI version.
The Upgrade all clusters errand in - the <%= vars.product_tile %> tile > Errands |
+ the Tanzu Kubernetes Grid Integrated Edition tile > Errands
All clusters. Clusters are upgraded serially. | |||
Use this order... | For more information, see... | |||
---|---|---|---|---|
<%= vars.k8s_runtime_abbr %> | +TKGI |
|
- Upgrading to <%= vars.k8s_runtime_abbr %> <%= vars.product_version %> | +Upgrading to TKGI <%= vars.product_version %> |
<%= vars.k8s_runtime_abbr %> and NSX | +TKGI and NSX |
|
- Upgrading to <%= vars.k8s_runtime_abbr %> <%= vars.product_version %> and NSX v4.0 | +Upgrading to TKGI <%= vars.product_version %> and NSX v4.0 |
<%= vars.k8s_runtime_abbr %>, NSX, and vSphere | +TKGI, NSX, and vSphere |
|
- Upgrading to <%= vars.k8s_runtime_abbr %> <%= vars.product_version %>, NSX, and vSphere v8.0 | +Upgrading to TKGI <%= vars.product_version %>, NSX, and vSphere v8.0 |
Warning: Refer to the Release Notes for current version support, known issues, and other important information.
-In this upgrade scenario, you upgrade <%= vars.product_short %> from <%= vars.product_version_prev %> to +In this upgrade scenario, you upgrade Tanzu Kubernetes Grid Integrated Edition from <%= vars.product_version_prev %> to <%= vars.product_version %> and NSX from v3.2.3, to v4.0.1 or later. The upgrade scenario includes the following steps: 1. Upgrade NSX from v3.2.3 or later to v4.0.1 or later. -1. Upgrade <%= vars.ops_manager %> to <%= vars.ops_man_version_v3 %> or later. -These are the recommended <%= vars.ops_manager %> versions -for <%= vars.product_short %> <%= vars.product_version %>.0. -To verify <%= vars.ops_manager %> compatibility with other <%= vars.product_version %> versions, see +1. Upgrade Ops Manager to <%= vars.ops_man_version_v3 %> or later. +These are the recommended Ops Manager versions +for Tanzu Kubernetes Grid Integrated Edition <%= vars.product_version %>.0. +To verify Ops Manager compatibility with other <%= vars.product_version %> versions, see [<%= vars.product_network %>](https://network.pivotal.io/products/pivotal-container-service/). -1. Upgrade <%= vars.product_short %> from <%= vars.product_version_prev %> to <%= vars.product_version %>. +1. Upgrade Tanzu Kubernetes Grid Integrated Edition from <%= vars.product_version_prev %> to <%= vars.product_version %>. 1. If you are upgrading a cluster that uses a public cloud CSI driver, see [Limitations on Using a Public Cloud CSI Driver](release-notes.html#1-16-0-csi-driver-limits) in _Release Notes_ for additional requirements. 1. Upgrade all Kubernetes clusters to -<%= vars.product_short %> <%= vars.product_version %>. +Tanzu Kubernetes Grid Integrated Edition <%= vars.product_version %>. This upgrades the NCP version of your clusters. See the table below for version information and instructions for this @@ -190,13 +178,13 @@ upgrade scenario:Warning: Refer to the Release Notes for current version support, known issues, and other important information. @@ -228,27 +212,24 @@ upgrade scenario: In this upgrade scenario, you upgrade: -* <%= vars.product_short %> from <%= vars.product_version_prev %> to <%= vars.product_version %> +* Tanzu Kubernetes Grid Integrated Edition from <%= vars.product_version_prev %> to <%= vars.product_version %> * NSX from v3.2.3 or later to v4.0.1 or later. * vSphere from v7.0 to v8.0 The upgrade scenario includes the following steps: 1. Upgrade NSX from v3.2.3 or later, to v4.0.1. or later. -<% if vars.product_version == "COMMENTED" %> -1. If you set DRS mode to **Manual** above, restore DRS to its original setting. -<% end %> -1. Upgrade <%= vars.ops_manager %> to <%= vars.ops_man_version_v3 %> or later. -These are the recommended <%= vars.ops_manager %> versions -for <%= vars.product_short %> <%= vars.product_version %>.0. -To verify <%= vars.ops_manager %> compatibility with other <%= vars.product_version %> versions, see +1. Upgrade Ops Manager to <%= vars.ops_man_version_v3 %> or later. +These are the recommended Ops Manager versions +for Tanzu Kubernetes Grid Integrated Edition <%= vars.product_version %>.0. +To verify Ops Manager compatibility with other <%= vars.product_version %> versions, see [<%= vars.product_network %>](https://network.pivotal.io/products/pivotal-container-service/). -1. Upgrade <%= vars.product_short %> from <%= vars.product_version_prev %> to <%= vars.product_version %>. +1. Upgrade Tanzu Kubernetes Grid Integrated Edition from <%= vars.product_version_prev %> to <%= vars.product_version %>. 1. 
If you are upgrading a cluster that uses a public cloud CSI driver, see [Limitations on Using a Public Cloud CSI Driver](release-notes.html#1-15-0-csi-driver-limits) in _Release Notes_ for additional requirements. 1. Upgrade all Kubernetes clusters to -<%= vars.product_short %> <%= vars.product_version %>. +Tanzu Kubernetes Grid Integrated Edition <%= vars.product_version %>. This upgrades the NCP version of your clusters. 1. Upgrade vSphere from v7.0 to v8.0. @@ -265,13 +246,13 @@ upgrade scenario:
Warning: Do not manually upgrade your Kubernetes version. -<%= vars.product_short %> includes the compatible Kubernetes version. +Tanzu Kubernetes Grid Integrated Edition includes the compatible Kubernetes version.
## Overview @@ -22,227 +22,227 @@ to plan and prepare your upgrade. After you complete the preparation steps, continue to the procedures in [Perform the Upgrade](#upgrade) below. -These steps guide you through the process of upgrading <%= vars.ops_manager_full %> (<%= vars.ops_manager %>) and the <%= vars.product_tile %> tile, +These steps guide you through the process of upgrading VMware Tanzu Operations Manager (Ops Manager) and the Tanzu Kubernetes Grid Integrated Edition tile, importing a new stemcell, and applying the changes to your deployment. After you complete the upgrade, follow the procedures in [After the Upgrade](#after-upgrade) below -to verify that your upgraded <%= vars.product_short %> deployment is running properly. +to verify that your upgraded Tanzu Kubernetes Grid Integrated Edition deployment is running properly. ## Prepare to Upgrade If you have not already, complete all of the steps in -[Upgrade Preparation Checklist for <%= vars.product_short %>](checklist.html). +[Upgrade Preparation Checklist for Tanzu Kubernetes Grid Integrated Edition](checklist.html). <% if vars.product_version == "v1.16" %>Note:
Kubernetes v1.25 does not serve the policy/v1beta1 PodSecurityPolicy
API.
-You must replace PodSecurityPolicy
configurations with PSA before upgrading to <%= vars.k8s_runtime_abbr %> v1.16.
+You must replace PodSecurityPolicy
configurations with PSA before upgrading to TKGI v1.16.
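As an illustration only, a namespace-scoped Pod Security Admission (PSA) configuration that replaces a PodSecurityPolicy-based control can be expressed with namespace labels such as the following; the namespace name and the `restricted` level are placeholder choices that you adjust to match your former PSP rules.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo-namespace                          # placeholder namespace
  labels:
    # Enforce the "restricted" Pod Security Standard in this namespace.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Surface warnings and audit records at the same level.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```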
Warning: If you use an automated pipeline to upgrade <%= vars.k8s_runtime_abbr %>,
-see Configure Automated <%= vars.ops_manager %> and
+ Warning: If you use an automated pipeline to upgrade TKGI,
+see Configure Automated Ops Manager and
Ubuntu Jammy Stemcell for VMware Tanzu Downloading in Configuring the Upgrade Pipeline.
<%# when editing this edit the other duplicate BELOW in this topic < %= partial 'add-clusters-workloads' % > #%>
-### Download and Import <%= vars.product_short %> <%= vars.product_version %>
+### Download and Import Tanzu Kubernetes Grid Integrated Edition <%= vars.product_version %>
-When you upgrade <%= vars.product_short %>,
+When you upgrade Tanzu Kubernetes Grid Integrated Edition,
your configuration settings typically migrate to the new version automatically.
-To download and import a <%= vars.product_short %> version:
+To download and import a Tanzu Kubernetes Grid Integrated Edition version:
1. Download the desired version of the product
from [<%= vars.product_network %>](https://network.pivotal.io/products/pivotal-container-service/).
-1. Navigate to the <%= vars.ops_manager %> Installation Dashboard and click **Import a Product**
+1. Navigate to the Ops Manager Installation Dashboard and click **Import a Product**
to upload the product file.
-1. Under the **Import a Product** button, click **+** next to **<%= vars.product_tile %>**.
+1. Under the **Import a Product** button, click **+** next to **Tanzu Kubernetes Grid Integrated Edition**.
This adds the tile to your staging area.
### Download and Import Stemcells
-<%= vars.k8s_runtime_abbr %> requires an Ubuntu Jammy Stemcell for VMware Tanzu.
+TKGI requires an Ubuntu Jammy Stemcell for VMware Tanzu.
A Windows 2019 Stemcell for VMware Tanzu is also required if you intend to create Windows worker-based clusters.
For information about Windows stemcells, see
[Configuring Windows Worker-Based Clusters](windows-workers.html).
-
Warning: If you use an automated pipeline to upgrade <%= vars.k8s_runtime_abbr %>,
-see Configure Automated <%= vars.ops_manager %>
+ Warning: If you use an automated pipeline to upgrade TKGI,
+see Configure Automated Ops Manager
and Ubuntu Jammy Stemcell Downloading in Configuring the Upgrade Pipeline.
-1. In the **Stemcell Library**, locate the **<%= vars.product_tile %>** tile and note the required stemcell version.
+1. In the **Stemcell Library**, locate the **Tanzu Kubernetes Grid Integrated Edition** tile and note the required stemcell version.
1. Navigate to the [Stemcells (Ubuntu Jammy)](https://network.pivotal.io/products/stemcells-ubuntu-jammy/) page on <%= vars.product_network %>
and download the required stemcell version for your IaaS.
-1. Return to the **Installation Dashboard** in <%= vars.ops_manager %> and click **Stemcell Library**.
+1. Return to the **Installation Dashboard** in Ops Manager and click **Stemcell Library**.
1. On the **Stemcell Library** page, click **Import Stemcell** and select the stemcell file you downloaded from <%= vars.product_network %>.
-1. Select the <%= vars.product_tile %> tile and click **Apply Stemcell to Products**.
+1. Select the Tanzu Kubernetes Grid Integrated Edition tile and click **Apply Stemcell to Products**.
-1. Verify that <%= vars.ops_manager %> successfully applied the stemcell. The stemcell version you imported and applied appears in the **Staged** column for <%= vars.product_tile %>.
+1. Verify that Ops Manager successfully applied the stemcell. The stemcell version you imported and applied appears in the **Staged** column for Tanzu Kubernetes Grid Integrated Edition.
1. Return to the **Installation Dashboard**.
### Modify Container Network Interface Configuration
-<%= vars.product_short %> supports using the Antrea Container Network Interface (CNI) as
-the CNI for new <%= vars.k8s_runtime_abbr %>-provisioned clusters.
+Tanzu Kubernetes Grid Integrated Edition supports using the Antrea Container Network Interface (CNI) as
+the CNI for new TKGI-provisioned clusters.
-To configure <%= vars.product_short %> to use Antrea as the CNI for new clusters:
+To configure Tanzu Kubernetes Grid Integrated Edition to use Antrea as the CNI for new clusters:
1. In the **Installation Dashboard**, click **Networking**.
1. Under **Container Networking Interface**, select **Antrea**.
1. Confirm the remaining Container Networking Interface settings.
1. Click **Save**.
-For more information about <%= vars.product_short %> support for Antrea and Flannel CNIs, see [About Switching from the Flannel CNI to the Antrea CNI](understanding-upgrades.html#cni)
+For more information about Tanzu Kubernetes Grid Integrated Edition support for Antrea and Flannel CNIs, see [About Switching from the Flannel CNI to the Antrea CNI](understanding-upgrades.html#cni)
in _About Tanzu Kubernetes Grid Integrated Edition Upgrades_.
### Verify Errand Configuration
To verify your **Errands** pane is correctly configured, do the following:
-1. In the **<%= vars.product_tile %>** tile, click **Errands**.
+1. In the **Tanzu Kubernetes Grid Integrated Edition** tile, click **Errands**.
1. Under **Post-Deploy Errands**:
* Review the **Upgrade all clusters** errand:
- * If you want to upgrade the <%= vars.product_tile %> tile and all your existing Kubernetes clusters simultaneously,
+ * If you want to upgrade the Tanzu Kubernetes Grid Integrated Edition tile and all your existing Kubernetes clusters simultaneously,
confirm that **Upgrade all clusters errand** is set to **Default (On)**.
The errand upgrades all clusters.
- Upgrading <%= vars.product_short %>-provisioned Kubernetes clusters can temporarily interrupt the service
+ Upgrading Tanzu Kubernetes Grid Integrated Edition-provisioned Kubernetes clusters can temporarily interrupt the service
as described in [Service Interruptions](interruptions.html).
<% if vars.product_version == "v1.17" %>
-
Warning: If you have <%= vars.k8s_runtime_abbr %>-provisioned Windows worker clusters, do not activate the Upgrade all clusters errand before upgrading to the <%= vars.k8s_runtime_abbr %> v1.17 tile. +
Warning: If you have TKGI-provisioned Windows worker clusters, do not activate the Upgrade all clusters errand before upgrading to the TKGI v1.17 tile. You cannot use the Upgrade all clusters errand because you must manually migrate each individual Windows worker cluster to the CSI Driver for vSphere. For more information, see Configure vSphere CSI for Windows in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
<% end %> - * If you want to upgrade the <%= vars.product_tile %> tile only and + * If you want to upgrade the Tanzu Kubernetes Grid Integrated Edition tile only and then upgrade your existing Kubernetes clusters separately, deactivate **Upgrade all clusters errand**. For more information, see [Upgrading Clusters](upgrade-clusters.html).Warning: Deactivating the Upgrade all clusters errand - causes the <%= vars.k8s_runtime_abbr %> version tagged in your Kubernetes clusters to fall behind - the <%= vars.product_tile %> tile version. + causes the TKGI version tagged in your Kubernetes clusters to fall behind + the Tanzu Kubernetes Grid Integrated Edition tile version. If you deactivate the Upgrade all clusters errand - when upgrading the <%= vars.product_tile %> tile, - you must upgrade all your Kubernetes clusters before the next <%= vars.product_short %> + when upgrading the Tanzu Kubernetes Grid Integrated Edition tile, + you must upgrade all your Kubernetes clusters before the next Tanzu Kubernetes Grid Integrated Edition upgrade.
* Configure the **Run smoke tests** errand: * Set the **Run smoke tests** errand to **On**. - The errand uses the <%= vars.product_short %> Command Line Interface (<%= vars.k8s_runtime_abbr %> CLI) to create a + The errand uses the Tanzu Kubernetes Grid Integrated Edition Command Line Interface (TKGI CLI) to create a Kubernetes cluster and then delete it. If the creation or deletion fails, the errand fails and - the installation of the <%= vars.product_tile %> tile is aborted. + the installation of the Tanzu Kubernetes Grid Integrated Edition tile is aborted. 1. Click **Save**. ### Verify Other Configurations -To confirm your other **<%= vars.product_tile %>** tile panes are correctly configured, do the following: +To confirm your other **Tanzu Kubernetes Grid Integrated Edition** tile panes are correctly configured, do the following: 1. Review the **Assign AZs and Networks** pane. -Note: When you upgrade <%= vars.product_short %>, you must place singleton jobs in the AZ you selected when you first installed the <%= vars.product_tile %> tile. You cannot move singleton jobs to another AZ.
+Note: When you upgrade Tanzu Kubernetes Grid Integrated Edition, you must place singleton jobs in the AZ you selected when you first installed the Tanzu Kubernetes Grid Integrated Edition tile. You cannot move singleton jobs to another AZ.
1. Review the other configuration panes. 1. Make changes where necessary.WARNING: Do not change the number of control plane/etcd nodes for any plan that was used to create currently-running clusters. -<%= vars.product_short %> does not support changing the number of control plane/etcd nodes for plans +Tanzu Kubernetes Grid Integrated Edition does not support changing the number of control plane/etcd nodes for plans with existing clusters.
1. Click **Save** on any panes where you make changes. -### Apply Changes to the <%= vars.product_tile %> Tile +### Apply Changes to the Tanzu Kubernetes Grid Integrated Edition Tile -To complete the upgrade of the <%= vars.product_tile %> tile: +To complete the upgrade of the Tanzu Kubernetes Grid Integrated Edition tile: -1. Return to the **Installation Dashboard** in <%= vars.ops_manager %>. +1. Return to the **Installation Dashboard** in Ops Manager. 1. Click **Review Pending Changes**. - For more information about this <%= vars.ops_manager %> page, see + For more information about this Ops Manager page, see [Reviewing Pending Product Changes](https://docs.vmware.com/en/VMware-Tanzu-Operations-Manager/3.0/vmware-tanzu-ops-manager/install-review-pending-changes.html). 1. Click **Apply Changes**. 1. (Optional) To monitor the progress of the **Upgrade all clusters errand** using the BOSH CLI, do the following: - 1. Log in to the BOSH Director by running `bosh -e MY-ENVIRONMENT log-in` from a VM that can access your <%= vars.product_short %> deployment. For more information, see [Using BOSH Diagnostic Commands in <%= vars.product_short %>](diagnostic-tools.html). + 1. Log in to the BOSH Director by running `bosh -e MY-ENVIRONMENT log-in` from a VM that can access your Tanzu Kubernetes Grid Integrated Edition deployment. For more information, see [Using BOSH Diagnostic Commands in Tanzu Kubernetes Grid Integrated Edition](diagnostic-tools.html). 1. Run `bosh -e MY-ENVIRONMENT tasks`. 1. Locate the task number for the errand in the # column of the BOSH output. 1. Run `bosh task TASK-NUMBER`, replacing `TASK-NUMBER` with the task number you located in the previous step. ## After the Upgrade -After you complete the upgrade to <%= vars.product_short %> <%= vars.product_version %>, +After you complete the upgrade to Tanzu Kubernetes Grid Integrated Edition <%= vars.product_version %>, complete the following verifications and upgrades: -- [Upgrade the <%= vars.k8s_runtime_abbr %> and Kubernetes CLIs](#upgrade-clis) +- [Upgrade the TKGI and Kubernetes CLIs](#upgrade-clis) - [Verify the Upgrade](#verify-upgrade) -### Upgrade the <%= vars.k8s_runtime_abbr %> and Kubernetes CLIs +### Upgrade the TKGI and Kubernetes CLIs -Upgrade the <%= vars.k8s_runtime_abbr %> and Kubernetes CLIs on any local machine -where you run commands that interact with your upgraded version of <%= vars.product_short %>. +Upgrade the TKGI and Kubernetes CLIs on any local machine +where you run commands that interact with your upgraded version of Tanzu Kubernetes Grid Integrated Edition. -To upgrade the CLIs, download and re-install the <%= vars.k8s_runtime_abbr %> and Kubernetes CLI distributions -that are provided with <%= vars.product_short %> on <%= vars.product_network %>. +To upgrade the CLIs, download and re-install the TKGI and Kubernetes CLI distributions +that are provided with Tanzu Kubernetes Grid Integrated Edition on <%= vars.product_network %>. For more information about installing the CLIs, see the following topics: -* [Installing the <%= vars.k8s_runtime_abbr %> CLI](installing-cli.html) +* [Installing the TKGI CLI](installing-cli.html) * [Installing the Kubernetes CLI](installing-kubectl-cli.html) ### Verify the Upgrade -After you apply changes to the <%= vars.product_tile %> tile and the upgrade is complete, +After you apply changes to the Tanzu Kubernetes Grid Integrated Edition tile and the upgrade is complete, do the following: 1. Verify that your Kubernetes environment is healthy. 
To verify the health of your Kubernetes environment, see [Verifying @@ -253,12 +253,12 @@ Deployment Health](./verify-health.html). see [Retrieve Cluster Upgrade Task ID](./verify-health.html#upgrade-code) in _Verifying Deployment Health_. <%# when editing this edit the other duplicate ABOVE in this topic < %= partial 'add-clusters-workloads' % > #%> -1. Verify that the <%= vars.product_short %> control plane remains functional by performing the following steps: +1. Verify that the Tanzu Kubernetes Grid Integrated Edition control plane remains functional by performing the following steps: 1. Add more workloads and create an additional cluster. For more information, see About Cluster Upgrades in _Maintaining Workload Uptime_ and Creating Clusters. - 1. Monitor the <%= vars.product_short %> control plane in the <%= vars.product_tile %> tile > Status tab. - Review the load and resource usage data for the <%= vars.control_plane %> and <%= vars.control_plane_db %> VMs. + 1. Monitor the Tanzu Kubernetes Grid Integrated Edition control plane in the Tanzu Kubernetes Grid Integrated Edition tile > Status tab. + Review the load and resource usage data for the TKGI API and TKGI Database VMs. If any levels are at capacity, scale up the VMs.Note: You must use the Velero binary signed by VMware to be eligible for support from VMware.
### Install the Velero CLI -To install the Velero CLI on the <%= vars.k8s_runtime_abbr %> client or on your local machine: +To install the Velero CLI on the TKGI client or on your local machine: 1. Open a command line and change directory to the Velero CLI download. 1. Unzip the download file: @@ -291,9 +291,6 @@ To install Velero: For example: ```console - <% if vars.product_version == "COMMENTED" %> - $ velero install --provider aws --plugins projects.registry.vmware.com/tkg/velero/velero-plugin-for-aws-<%= vars.velero_version_aws %>_vmware.1 \ -<% end %> $ velero install --image projects.registry.vmware.com/tkg/velero/velero:<%= vars.velero_version %>_vmware.1 --provider aws --plugins projects.registry.vmware.com/tkg/velero/velero-plugin-for-aws-<%= vars.velero_version_aws %>_vmware.1 \ --bucket tkgi-velero --secret-file ./credentials-minio --use-volume-snapshots=false \ --default-volumes-to-fs-backup \ @@ -356,7 +353,7 @@ To install Velero: ### Modify the Host Path -To run the three-pod node-agent DaemonSet on a Kubernetes cluster in <%= vars.k8s_runtime_abbr %>, +To run the three-pod node-agent DaemonSet on a Kubernetes cluster in TKGI, you must modify the node-agent DaemonSet spec and modify the `hostpath` property. To modify the node-agent DaemonSet: @@ -456,13 +453,13 @@ in the Velero documentation. - A private container registry is installed and configured. The instructions use Harbor. -- Docker is installed on the workstation or <%= vars.k8s_runtime_abbr %> jump host. +- Docker is installed on the workstation or TKGI jump host. - kubectl context has been set and the MinIO `credentials-minio` file exists. For more information, see [Set Up the kubectl Context ](#velero-cluster-setup) above. ### Procedure -1. Open the VMware Velero downloads page for your version of <%= vars.k8s_runtime_abbr %> linked to from the _Product Snapshot_ of the [Release Notes](release-notes.html). -1. Download the Velero CLI and Velero with restic Docker images for your version of <%= vars.k8s_runtime_abbr %>: +1. Open the VMware Velero downloads page for your version of TKGI linked to from the _Product Snapshot_ of the [Release Notes](release-notes.html). +1. Download the Velero CLI and Velero with restic Docker images for your version of TKGI: - velero-<%= vars.velero_version %>+vmware.1.gz - velero-plugin-for-aws-<%= vars.velero_version_aws %>_vmware.1.tar.gz - velero-restic-restore-helper-<%= vars.velero_version %>+vmware.1.tar.gz diff --git a/velero-stateful-ingress.html.md.erb b/velero-stateful-ingress.html.md.erb index c8fa70ce9..8b3bd56f7 100644 --- a/velero-stateful-ingress.html.md.erb +++ b/velero-stateful-ingress.html.md.erb @@ -25,7 +25,7 @@ To demonstrate backing up and restoring a stateful application: Before starting your Velero demonstration, you need to: -* Have a <%= vars.k8s_runtime_abbr %> Kubernetes cluster with static IP set from a floating IP pool. +* Have a TKGI Kubernetes cluster with static IP set from a floating IP pool. * MinIO and Velero have been installed. For more information, see [Installing Velero with File System Backup](./velero-install.html). * Download the [Coffee-Tea app YAML files](https://github.com/pivotal-cf/docs-pks/tree/<%= vars.product_version_raw %>/demos/cafe-app) to a local known directory: @@ -64,9 +64,9 @@ To create and apply a network profile for DNS lookup of the Kubernetes API serve Where `INGRESS-SUBDOMAIN` is the ingress subdomain prefix.Note: If there are multiple control plane nodes, all control plane node VMs are the same size. 
To configure the number of control plane nodes, - see the Plans section of Installing <%= vars.product_short %> for your IaaS. + see the Plans section of Installing Tanzu Kubernetes Grid Integrated Edition for your IaaS.
To customize the size of the Kubernetes control plane node VM, @@ -85,7 +85,7 @@ We recommend that you increase this value to account for failures and upgrades. For example, increase the number of worker nodes by at least one to maintain workload uptime during an upgrade. Additionally, increase the number of worker nodes to fit your own failure tolerance criteria. -The maximum number of worker nodes that you can create for a plan in an <%= vars.product_short %>-provisioned Kubernetes cluster is set by the **Maximum number of workers on a cluster** field in the **Plans** pane of the <%= vars.product_tile %> tile. To customize the size of the Kubernetes worker node VM, see [Customize Control Plane and Worker Node VM Size and Type](#node-sizing-custom). +The maximum number of worker nodes that you can create for a plan in an Tanzu Kubernetes Grid Integrated Edition-provisioned Kubernetes cluster is set by the **Maximum number of workers on a cluster** field in the **Plans** pane of the Tanzu Kubernetes Grid Integrated Edition tile. To customize the size of the Kubernetes worker node VM, see [Customize Control Plane and Worker Node VM Size and Type](#node-sizing-custom). ### Example Worker Node Requirement Calculation @@ -123,9 +123,9 @@ In total, this app workload requires 13 workers with 10 CPUs and 100 GB RAM ## Customize Control Plane and Worker Node VM Size and Type You select the CPU, memory, and disk space for the Kubernetes node VMs from -a set list in the <%= vars.product_tile %> tile. Control Plane and worker node VM sizes and types are selected on a per-plan -basis. For more information, see the Plans section of the <%= vars.product_short %> installation topic -for your IaaS. For example, [Installing <%= vars.product_short %> on vSphere with NSX](./installing-nsx-t.html#plans). +a set list in the Tanzu Kubernetes Grid Integrated Edition tile. Control Plane and worker node VM sizes and types are selected on a per-plan +basis. For more information, see the Plans section of the Tanzu Kubernetes Grid Integrated Edition installation topic +for your IaaS. For example, [Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX](./installing-nsx-t.html#plans). While the list of available node VM types and sizes is extensive, the list may not provide the exact type and size of VM that you want. You can use the Ops Manager diff --git a/volumes.html.md.erb b/volumes.html.md.erb index b005dc776..d7ee76625 100644 --- a/volumes.html.md.erb +++ b/volumes.html.md.erb @@ -3,7 +3,7 @@ title: Configuring and Using PersistentVolumes owner: TKGI --- -This topic describes how to provision static and dynamic PersistentVolumes (PVs) to run stateful apps using <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>). +This topic describes how to provision static and dynamic PersistentVolumes (PVs) to run stateful apps using VMware Tanzu Kubernetes Grid Integrated Edition (TKGI). For static PV provisioning, the PersistentVolumeClaim (PVC) does not need to reference a StorageClass. For dynamic PV provisioning, you must specify a StorageClass and define the PVC using a reference to that StorageClass. @@ -190,7 +190,7 @@ Dynamic PV provisioning gives developers the freedom to provision storage when t For dynamic PV provisioning, the procedure is to define and create a PVC that automatically triggers the creation of the PV and its backend VMDK file. When the PV is created, Kubernetes knows which volume instance is available for use. 
When a PVC or volumeClaimTemplate is requested, Kubernetes chooses an available PV and allocates it to the Deployment or StatefulSets workload. -<%= vars.product_short %> supports dynamic PV provisioning by providing StorageClasses for all supported cloud providers, as well as an example PVC. +Tanzu Kubernetes Grid Integrated Edition supports dynamic PV provisioning by providing StorageClasses for all supported cloud providers, as well as an example PVC.Note: For dynamic PVs on vSphere, you must create or map the VMDK file for the StorageClass on a shared file system datastore. This shared file system datastore must be accessible to each vSphere cluster where Kubernetes cluster nodes run. For more information, see PersistentVolume Storage Options on vSphere.
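As a rough sketch of that flow (Tanzu Kubernetes Grid Integrated Edition ships its own example PVC; this one is illustrative only), a claim that triggers dynamic provisioning simply names a registered StorageClass. The claim name, StorageClass name, and size below are placeholders.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-dynamic-pvc                  # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: demo-storage-class    # must match a StorageClass registered in the cluster
  resources:
    requests:
      storage: 8Gi                        # requested capacity; the PV is provisioned to match
```

A Deployment or StatefulSet then mounts the claim by name, and Kubernetes binds it to the dynamically provisioned PV.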
@@ -305,7 +305,7 @@ provisioner: kubernetes.io/vsphere-volumeNote: The above example uses the vSphere provisioner. Refer to the Kubernetes documentation for information about provisioners for other cloud providers.
-### Provision Dynamic PVs for Use with <%= vars.product_short %> +### Provision Dynamic PVs for Use with Tanzu Kubernetes Grid Integrated Edition Perform the steps in this section to register one or more StorageClasses and define a PVC that can be applied to newly-created pods. diff --git a/vsphere-cns-manual.html.md.erb b/vsphere-cns-manual.html.md.erb index 8bdfe453c..b11236cbc 100644 --- a/vsphere-cns-manual.html.md.erb +++ b/vsphere-cns-manual.html.md.erb @@ -3,12 +3,12 @@ title: Manually Installing the vSphere CSI Driver owner: TKGI --- -This topic explains how to manually integrate Cloud Native Storage (CNS) with <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>) on vSphere using the vSphere Container Storage Interface (CSI) driver. -This integration enables <%= vars.k8s_runtime_abbr %> clusters to use external container storage. +This topic explains how to manually integrate Cloud Native Storage (CNS) with VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere using the vSphere Container Storage Interface (CSI) driver. +This integration enables TKGI clusters to use external container storage. This topic provides procedures for installing CSI on a TKGI cluster, verifying the installation and resizing PersistentVolumes. -Note: CSI can only be installed on a Linux <%= vars.k8s_runtime_abbr %> cluster.
+Note: CSI can only be installed on a Linux TKGI cluster.
## Overview @@ -23,8 +23,8 @@ For more information, see [Getting Started with VMware Cloud Native Storage](htt To create PersistentVolumes using CNS on vSphere, see: -* [Prerequisites for using the vSphere CSI Driver with <%= vars.k8s_runtime_abbr %>](#prereq) -* [Install the vSphere CSI Driver on a <%= vars.k8s_runtime_abbr %> Cluster](#manual) +* [Prerequisites for using the vSphere CSI Driver with TKGI](#prereq) +* [Install the vSphere CSI Driver on a TKGI Cluster](#manual) * [Create a vSphere Storage Class](#create-storage)Note: VMware recommends using vSphere CSI Driver v2.4 or later with <%= vars.k8s_runtime_abbr %> v1.13. +
Note: VMware recommends using vSphere CSI Driver v2.4 or later with TKGI v1.13.
For instructions on how to upgrade the vSphere CSI Driver version, see - [Upgrade the vSphere CSI Driver on a <%= vars.k8s_runtime_abbr %> Cluster](#manual-upgrade) below. + [Upgrade the vSphere CSI Driver on a TKGI Cluster](#manual-upgrade) below. * **The vSphere CSI Driver requirements have been met:** For more information, see [Preparing for Installation of vSphere Container Storage Plug-in](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/3.0/vmware-vsphere-csp-getting-started/GUID-0AB6E692-AA47-4B6A-8CEA-38B754E16567.html). -* **Plan Configuration:** The **Allow Privileged** setting must be enabled in the <%= vars.k8s_runtime_abbr %> tile for the plans you use with the vSphere CSI Driver. - To enable this setting, see [Installing <%= vars.k8s_runtime_abbr %> on vSphere](installing-vsphere.html#plans). -<% if vars.product_version == "COMMENTED" %> -<%#WARNING: Update the configuration file only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
WARNING: Update the configuration file only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
@@ -444,11 +440,6 @@ To create a Persistent Volume using the vSphere CSI Driver: 1. Create the PersistentVolumeClaim configuration for the file volume. For information about configuring a PVC, see [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) in the Kubernetes documentation.Note: You cannot add topology-aware volume provisioning to an existing cluster within <%= vars.k8s_runtime_abbr %>. +
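For orientation, a PersistentVolumeClaim for a file volume might look like the following sketch; the claim and StorageClass names are placeholders, and the `ReadWriteMany` access mode is what marks the request as a shared file volume rather than a block volume.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-file-volume-pvc                   # placeholder name
spec:
  accessModes:
    - ReadWriteMany                            # file volumes can be mounted read-write by multiple nodes
  storageClassName: demo-file-storage-class    # placeholder; must reference a StorageClass that supports file shares
  resources:
    requests:
      storage: 5Gi
```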
Note: You cannot add topology-aware volume provisioning to an existing cluster within TKGI.
Note: vSphere CSI driver support for Windows worker nodes is in Alpha.
@@ -670,7 +655,7 @@ You can use the vSphere CSI Driver with <%= vars.k8s_runtime_abbr %> Windows worWarning: If you have <%= vars.k8s_runtime_abbr %>-provisioned Windows worker clusters, do not activate the Upgrade all clusters errand before upgrading to the <%= vars.k8s_runtime_abbr %> v1.17 tile. +
Warning: If you have TKGI-provisioned Windows worker clusters, do not activate the Upgrade all clusters errand before upgrading to the TKGI v1.17 tile. You cannot use the Upgrade all clusters errand because you must manually migrate each individual Windows worker cluster to the CSI Driver for vSphere
<% end %> @@ -729,13 +714,13 @@ For more information, see [Introduction](https://github.com/kubernetes-sigs/vsphWarning: If you have <%= vars.k8s_runtime_abbr %>-provisioned Windows worker clusters, do not activate the Upgrade all clusters errand before upgrading to the <%= vars.k8s_runtime_abbr %> v1.17 tile. +1. Upgrade the TKGI tile to TKGI v1.17 with the **Upgrade all clusters** errand deactivated. +
Warning: If you have TKGI-provisioned Windows worker clusters, do not activate the Upgrade all clusters errand before upgrading to the TKGI v1.17 tile. You cannot use the Upgrade all clusters errand because you must manually migrate each individual Windows worker cluster to the CSI Driver for vSphere.
1. [Prepare a Windows Stemcell for vSphere CSI](#windows-prepare-stemcell). -1. Upgrade each <%= vars.k8s_runtime_abbr %>-provisioned Windows worker cluster individually: +1. Upgrade each TKGI-provisioned Windows worker cluster individually: 1. [Prepare vSphere CSI for a Windows Cluster](#windows-prepare-windows-cluster). 1. Complete the cluster upgrade prerequisites. For more information, see [Prerequisites](upgrade-clusters.html#prerequisites) in _Upgrading Clusters_. 1. Upgrade the cluster. For more information, see [Upgrade a Single Cluster](upgrade-clusters.html#upgrade-cluster) in _Upgrading Clusters_. @@ -801,7 +786,7 @@ To test your Windows stemcell:WARNING: Update the configuration file only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
<% end %> @@ -942,7 +927,7 @@ To configure or manage vSphere CSI on a Windows cluster: You can limit the number of persistent volumes attached to a Linux cluster node on vSphere. You can configure the maximum number of node persistent volumes on an existing cluster and during cluster creation. -By default, <%= vars.k8s_runtime_abbr %> configures Linux clusters on vSphere with a maximum of 45 attached persistent volumes. +By default, TKGI configures Linux clusters on vSphere with a maximum of 45 attached persistent volumes. You can decrease the maximum number of attached persistent volumes from 45 down to a minimum of 1. On vSphere 8 you can also increase the maximum number of attached persistent volumes. Contact VMware Support to determine the maximum number of attached persistent volumes supported by your vSphere environment. @@ -1076,7 +1061,7 @@ To create a new cluster or update an existing cluster with the new snapshot conf ```console tkgi update-cluster demo --config-file ./snapshot.json ``` -WARNING: Update the configuration file only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
For more information on volume snapshots, see @@ -1107,7 +1092,7 @@ To configure CNS data centers for a multi-data center environment: Where: * `DATA-CENTER-LIST` is a comma-separated list of vCenter data centers that must mount your CNS storage. - The default data center for a cluster is the data center defined on the <%= vars.k8s_runtime_abbr %> tile + The default data center for a cluster is the data center defined on the TKGI tile in **Kubernetes Cloud Provider** > **Datacenter Name**. For example: @@ -1121,11 +1106,6 @@ To configure CNS data centers for a multi-data center environment: see the description of `datacenters` in [Procedure](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/3.0/vmware-vsphere-csp-getting-started/GUID-BFF39F1D-F70A-4360-ABC9-85BDAFBE8864.html#procedure-1) in _Create a Kubernetes Secret for vSphere Container Storage Plug-in_. - - <% if vars.product_version == "COMMENTED" %> - `"disable-vsphere-csi"`: * `DEACTIVATE-CSI` (Optional) is a toggle to deactivate vSphere CSI Driver support. - Accepts Boolean values `"false"` and `"true"`. Default is `"false"`. - <% end %> 1. To create a new cluster or update an existing cluster with your vCenter data centers: @@ -1158,12 +1138,12 @@ To configure CNS data centers for a multi-data center environment: * `CLUSTER-NAME` is the name of your cluster. * `CONFIG-FILE` is the name of your configuration file. -WARNING: Update the configuration file only on a <%= vars.k8s_runtime_abbr %> cluster that has been upgraded to the current <%= vars.k8s_runtime_abbr %> version. For more information, see Tasks Supported Following a <%= vars.k8s_runtime_abbr %> Control Plane Upgrade in About <%= vars.product_short %> Upgrades. +
WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
Note: The recommended method for installing <%= vars.product_short %> on vSphere is to use the <%= vars.product_short %> Management Console. For information, see Install on vSphere with the Management Console.
+Note: The recommended method for installing Tanzu Kubernetes Grid Integrated Edition on vSphere is to use the Tanzu Kubernetes Grid Integrated Edition Management Console. For information, see Install on vSphere with the Management Console.
-To install <%= vars.product_short %> on vSphere with Flannel networking follow the instructions below: +To install Tanzu Kubernetes Grid Integrated Edition on vSphere with Flannel networking follow the instructions below:Note: For production clusters, three control plane nodes are required, and a minimum of three worker nodes are required. See Requirements for <%= vars.product_short %> on vSphere with NSX for more information.
+Note: For production clusters, three control plane nodes are required, and a minimum of three worker nodes are required. See Requirements for Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX for more information.
## NSX Logical Switches -When a new Kubernetes cluster is created, <%= vars.product_short %> creates the following [NSX logical switches](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-F89C1C1F-A270-4FC9-A1CF-CB90545FB636.html): +When a new Kubernetes cluster is created, Tanzu Kubernetes Grid Integrated Edition creates the following [NSX logical switches](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-F89C1C1F-A270-4FC9-A1CF-CB90545FB636.html):Infrastructure Network |
@@ -250,9 +250,9 @@ To configure the AZs and the Network for BOSH Director:
<%= image_tag("images/nsxt/bosh/config-bosh-18.png", :alt => "TKGI tile Assign AZs and Networks tab default configuration") %>
-1. Use the drop-down menu to select a **Singleton Availability Zone**. The Ops Manager Director installs in this Availability Zone. For <%= vars.product_short %>, this will be the `AZ-MGMT` availability zone.
+1. Use the drop-down menu to select a **Singleton Availability Zone**. The Ops Manager Director installs in this Availability Zone. For Tanzu Kubernetes Grid Integrated Edition, this will be the `AZ-MGMT` availability zone.
-1. Use the drop-down menu to select a **Network** for BOSH Director. BOSH Director runs on the <%= vars.product_short %> Management Plane network. Select the `NST-MGTM-TKGI` network.
+1. Use the drop-down menu to select a **Network** for BOSH Director. BOSH Director runs on the Tanzu Kubernetes Grid Integrated Edition Management Plane network. Select the `NST-MGTM-TKGI` network.
1. Click **Save**.
@@ -268,7 +268,7 @@ To configure a BOSH Director certificate and password:
If you are using self-signed CAs for the infrastructure components (NSX, vCenter), you need to add every CA of every component your deployment might connect to. In other words, the bundle must include all certificates for any component that connects to or from BOSH.
- If you are using a private Docker registry, such as VMware Harbor, use this field to enter the certificate for the registry. See [Integrating Harbor Registry with <%= vars.product_short %>](https://docs.vmware.com/en/VMware-Harbor-Registry/services/vmware-harbor-registry/GUID-integrating-pks.html) for details.
+ If you are using a private Docker registry, such as VMware Harbor, use this field to enter the certificate for the registry. See [Integrating Harbor Registry with Tanzu Kubernetes Grid Integrated Edition](https://docs.vmware.com/en/VMware-Harbor-Registry/services/vmware-harbor-registry/GUID-integrating-pks.html) for details.
1. Choose **Generate passwords** or **Use default BOSH password**. Use the **Generate passwords** option for increased security.
@@ -351,13 +351,13 @@ To deploy BOSH:
<%= image_tag("images/nsxt/bosh/config-bosh-23.png", :alt => "Ops Manager UI Apply Changes - Changes Applied notification") %>
-1. Check BOSH VM. Log in to vCenter and check for the `p-bosh` VM deployment in the <%= vars.product_short %> Management resource pool.
+1. Check the BOSH VM: log in to vCenter and verify that the `p-bosh` VM is deployed in the Tanzu Kubernetes Grid Integrated Edition Management resource pool.
<%= image_tag("images/nsxt/bosh/config-bosh-24.png", :alt => "vCenter UI p-bosh VM deployment configuration") %>
## Step 13: Update Network Availability Zones
-After successfully deploying BOSH, ensure that both the Management AZ and the Compute AZs appear in the <%= vars.product_tile %> tile Plans.
+After successfully deploying BOSH, ensure that both the Management AZ and the Compute AZs appear in the Tanzu Kubernetes Grid Integrated Edition tile Plans.
To ensure that the Management AZ and the Compute AZs are included in the `NET-MGMT-TKGI` network you defined above:
@@ -377,4 +377,4 @@ To ensure that the Management AZ and the Compute AZs are included in the `NET-MG
## Next Step
-Generate and Register the NSX Manager Superuser Principal Identity Certificate and Key for <%= vars.product_short %>.
+Generate and Register the NSX Manager Superuser Principal Identity Certificate and Key for Tanzu Kubernetes Grid Integrated Edition.
diff --git a/vsphere-nsxt-om-deploy.html.md.erb b/vsphere-nsxt-om-deploy.html.md.erb
index ffebcc74c..1cda84471 100644
--- a/vsphere-nsxt-om-deploy.html.md.erb
+++ b/vsphere-nsxt-om-deploy.html.md.erb
@@ -3,23 +3,23 @@ title: Deploying Ops Manager with VMware NSX for Tanzu Kubernetes Grid Integrate
owner: Ops Manager
---
-This topic describes how to deploy <%= vars.ops_manager_full %> (<%= vars.ops_manager %>) on VMware vSphere with NSX integration for use with <%= vars.product_full %> (<%= vars.k8s_runtime_abbr %>).
+This topic describes how to deploy VMware Tanzu Operations Manager (Ops Manager) on VMware vSphere with NSX integration for use with VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).
##Prerequisites
-Before deploying Ops Manager with NSX for <%= vars.product_short %>, you must have completed the following tasks:
+Before deploying Ops Manager with NSX for Tanzu Kubernetes Grid Integrated Edition, you must have completed the following tasks:
---|
160 | ||||
<%= vars.control_plane %> | +TKGI API | 2 | 8 | 64 |
<%= vars.control_plane_db %> | +TKGI Database | 2 | 8 | 64 | @@ -65,11 +65,11 @@ Installing Ops Manager and <%= vars.product_short %> requires the following virt