docs: correct spelling in docs
Correct spelling to improve the readability of the documentation
Vladimir Belousov committed Nov 1, 2021
1 parent 7f60703 commit 51a4034
Showing 16 changed files with 33 additions and 27 deletions.
2 changes: 1 addition & 1 deletion docs/design/openstack/networking-infrastructure.md
@@ -48,7 +48,7 @@ Keepalived. While the bootstrap node is up, it will have priority running the AP

The Master nodes run dhcp, HAProxy, CoreDNS, and Keepalived. Haproxy loadbalances incoming requests
to the API across all running masters. It also runs a stats and healthcheck server. Keepalived manages both VIPs on the master, where each
master has an equal chance of being assigned one of the VIPs. Initially, the bootstrap node has the highest priority for hosting the API VIP, so they will point to addresses there at startup. Meanwhile, the master nodes will try to get the control plane, and the OpenShift API up. Keepalived implements periodic health checks for each VIP that are used to determine the weight assigned to each server. The server with the highest weight is assigned the VIP. Keepalived has two seperate healthchecks that attempt to reach the OpenShift API and CoreDNS on the localhost of each master node. When the API on a master node is reachable, Keepalived substantially increases it's weight for that VIP, making its priority higher than that of the bootstrap node and any node that does not yet have the that service running. This ensures that nodes that are incapable of serving DNS records or the OpenShift API do not get assigned the respective VIP. The Ingress VIP is also managed by a healthcheck that queries for an OCP Router HAProxy healthcheck, not the HAProxy we stand up in static pods for the API. This makes sure that the Ingress VIP is pointing to a server that is running the necessary OpenShift Ingress Operator resources to enable external access to the node.
master has an equal chance of being assigned one of the VIPs. Initially, the bootstrap node has the highest priority for hosting the API VIP, so they will point to addresses there at startup. Meanwhile, the master nodes will try to get the control plane, and the OpenShift API up. Keepalived implements periodic health checks for each VIP that are used to determine the weight assigned to each server. The server with the highest weight is assigned the VIP. Keepalived has two separate healthchecks that attempt to reach the OpenShift API and CoreDNS on the localhost of each master node. When the API on a master node is reachable, Keepalived substantially increases it's weight for that VIP, making its priority higher than that of the bootstrap node and any node that does not yet have the that service running. This ensures that nodes that are incapable of serving DNS records or the OpenShift API do not get assigned the respective VIP. The Ingress VIP is also managed by a healthcheck that queries for an OCP Router HAProxy healthcheck, not the HAProxy we stand up in static pods for the API. This makes sure that the Ingress VIP is pointing to a server that is running the necessary OpenShift Ingress Operator resources to enable external access to the node.

The Worker Nodes run dhcp, CoreDNS, and Keepalived. On workers, Keepalived is only responsible for managing
the Ingress VIP. It's algorithm is the same as the one run on the masters.
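
For illustration only (the endpoint, port, and script name are assumptions, not taken from this repository), the health check that drives the API VIP usually boils down to a small shell script that Keepalived runs as a track script; its exit code decides whether the node earns the extra VRRP weight:

```sh
#!/usr/bin/env bash
# chk_ocp_api.sh -- hypothetical Keepalived track script.
# Exits 0 while the OpenShift API answers locally, non-zero otherwise.
# Keepalived adds its configured weight only while this succeeds, so a node
# that cannot serve the API never wins the election for the API VIP.
curl --silent --fail --insecure --max-time 3 https://localhost:6443/readyz >/dev/null
```

A check of the same shape, pointed at the router's health endpoint instead of the API, is what moves the Ingress VIP between workers.
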
4 changes: 2 additions & 2 deletions docs/dev/libvirt/README.md
@@ -113,7 +113,7 @@ First, you need to start the libvirtd TCP socket, which is managed by systemd:
sudo systemctl start libvirtd-tcp.socket
```

To make this change persistent accross reboots you can optionally enable it:
To make this change persistent across reboots you can optionally enable it:

```sh
sudo systemctl enable libvirtd-tcp.socket
@@ -415,7 +415,7 @@ FATA[0019] failed to run Terraform: exit status 1

it is likely that your install configuration contains three backslashes after the protocol (e.g. `qemu+tcp:///...`), when it should only be two.

### Random domain creation errors due to libvirt race conditon
### Random domain creation errors due to libvirt race condition

Depending on your libvirt version you might encounter [a race condition][bugzilla_libvirt_race] leading to an error similar to:

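As a quick sanity check (the libvirt host IP below is an assumption; substitute your own), you can confirm that the TCP socket is listening and that a two-slash connection URI works before running the installer:

```sh
# libvirtd listens on TCP port 16509 by default once the socket unit is active.
sudo ss -tlnp | grep 16509

# Exercise the same style of URI the installer will use -- note the two slashes
# after "qemu+tcp:", not three.
virsh -c qemu+tcp://192.168.122.1/system version
```
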
2 changes: 1 addition & 1 deletion docs/user/aws/limits.md
@@ -42,7 +42,7 @@ For multiple clusters, a higher limit will likely be required (and will certainl

### Example: Using North Virginia (us-east-1)

North Virginia (us-east-1) has six availablity zones, so a higher limit is required unless you configure your cluster to use fewer zones.
North Virginia (us-east-1) has six availability zones, so a higher limit is required unless you configure your cluster to use fewer zones.
To support the default, all-zone installation, please submit a limit increase for VPC Elastic IPs similar to the following in the support dashboard (to create more than one cluster, a higher limit will be necessary):

![Increase Elastic IP limit in AWS](images/support_increase_elastic_ip.png)
4 changes: 2 additions & 2 deletions docs/user/azure/install_upi.md
@@ -12,7 +12,7 @@ example.
* the following binaries installed and in $PATH:
* [openshift-install][openshiftinstall]
* It is recommended that the OpenShift installer CLI version is the same of the cluster being deployed. The version used in this example is 4.3.0 GA.
* [az (Azure CLI)][azurecli] installed and aunthenticated
* [az (Azure CLI)][azurecli] installed and authenticated
* Commands flags and structure may vary between `az` versions. The recommended version used in this example is 2.0.80.
* python3
* [jq][jqjson]
@@ -455,7 +455,7 @@ csr-wpvxq 19m system:serviceaccount:openshift-machine-config-operator:node-
csr-xpp49 19m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
```

You should inspect each pending CSR with the `oc describe csr <name>` command and verify that it comes from a node you recognise. If it does, they can be approved:
You should inspect each pending CSR with the `oc describe csr <name>` command and verify that it comes from a node you recognize. If it does, they can be approved:

```console
$ oc adm certificate approve csr-8bppf csr-dj2w4 csr-ph8s8
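
If many CSRs are pending, approving them one at a time gets tedious. A common convenience one-liner (shown here as a sketch, not taken from this document) selects only the CSRs that have no status yet and approves them in bulk:

```sh
# Approve every CSR that is still pending; inspect them with `oc describe csr <name>` first.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```
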
6 changes: 3 additions & 3 deletions docs/user/azure/limits.md
@@ -32,7 +32,7 @@ A public IP address is also created for the bootstrap machine during installatio
## Network Security Groups

Each cluster creates network security groups for every subnet within the VNet. The default install creates network
security groups for the control plane and for the compuete node subnets. The default limit of 5000 for new accounts
security groups for the control plane and for the compute node subnets. The default limit of 5000 for new accounts
allows for many clusters to be created. The network security groups which exist after the default install are:

1. controlplane
@@ -94,13 +94,13 @@ By default, each cluster will create 3 network load balancers. The default limit
3. external
* Public IP address that load balances requests to port 6443 across control-plane nodes

Additional Kuberntes LoadBalancer Service objects will create additional [load balancers][load-balancing].
Additional Kubernetes LoadBalancer Service objects will create additional [load balancers][load-balancing].


## Increasing limits


To increase a limit beyond the maximum, a suppport request will need to be filed.
To increase a limit beyond the maximum, a support request will need to be filed.

First, click on "help + support". It is located on the bottom left menu.

4 changes: 2 additions & 2 deletions docs/user/customization.md
@@ -39,7 +39,7 @@ The following `install-config.yaml` properties are available:
The default is 10.128.0.0/14 with a host prefix of /23.
* `cidr` (required [IP network](#ip-networks)): The IP block address pool.
* `hostPrefix` (required integer): The prefix size to allocate to each node from the CIDR.
For example, 24 would allocate 2^8=256 adresses to each node. If this field is not used by the plugin, it can be left unset.
For example, 24 would allocate 2^8=256 addresses to each node. If this field is not used by the plugin, it can be left unset.
* `machineNetwork` (optional array of objects): The IP address pools for machines.
* `cidr` (required [IP network](#ip-networks)): The IP block address pool.
The default is 10.0.0.0/16 for all platforms other than libvirt.
@@ -72,7 +72,7 @@ For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

The following machine-pool properties are available:

* `architecture` (optional string): Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture.
* `architecture` (optional string): Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture.
Valid values are `amd64` (the default).
* `hyperthreading` (optional string): Determines the mode of hyperthreading that machines in the pool will utilize.
Valid values are `Enabled` (the default) and `Disabled`.
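
To make the properties above concrete, the fragment below writes a minimal, illustrative `install-config.yaml` snippet covering the networking and machine-pool fields just described (the values are examples, not recommendations from this document):

```sh
# Illustrative only: a fragment showing the fields discussed above.
cat > install-config-fragment.yaml <<'EOF'
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23        # each node gets a /23, i.e. 2^9 = 512 pod addresses
  machineNetwork:
  - cidr: 10.0.0.0/16
compute:
- name: worker
  architecture: amd64
  hyperthreading: Enabled
EOF
```
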
2 changes: 1 addition & 1 deletion docs/user/gcp/limits.md
@@ -55,7 +55,7 @@ A standard OpenShift installation creates 2 forwarding rules.
A standard OpenShift installation creates 3 in-use global IP addresses.

### Networks
A standard OpenShift instlalation creates 2 networks.
A standard OpenShift installation creates 2 networks.

### Routers
A standard OpenShift installation creates 1 router.
2 changes: 1 addition & 1 deletion docs/user/metal/README.md
@@ -2,7 +2,7 @@

OpenShift has support for bare metal deployments with either [User
provided infrastructure (UPI)](install_upi.md), or [Installer-provided
instrastructure (IPI)](install_ipi.md).
infrastructure (IPI)](install_ipi.md).

The following is a summary of key differences:

2 changes: 1 addition & 1 deletion docs/user/metal/customization_ipi.md
@@ -57,7 +57,7 @@ and TFTP server in the cluster to support provisioning. Much of this can
be customized.


* `provisioningNetorkCIDR` (optional string): Override the default provisioning network.
* `provisioningNetworkCIDR` (optional string): Override the default provisioning network.
* `bootstrapProvisioningIP` (optional string): Override the bootstrap
provisioning IP. If unspecified, uses the 2nd address in the
provisioning network's subnet.
6 changes: 6 additions & 0 deletions docs/user/openstack/README.md
@@ -36,9 +36,15 @@ In addition, it covers the installation with the default CNI (OpenShiftSDN), as
- [Destroying The Cluster](#destroying-the-cluster)
- [Post Install Operations](#post-install-operations)
- [Adding a MachineSet](#adding-a-machineset)
- [Defining a MachineSet That Uses Multiple Networks](#defining-a-machineset-that-uses-multiple-networks)
- [Using a Server Group](#using-a-server-group)
- [Setting Nova Availability Zones](#setting-nova-availability-zones)
- [Using a Custom External Load Balancer](#using-a-custom-external-load-balancer)
- [External Facing OpenShift Services](#external-facing-openshift-services)
- [HAProxy Example Load Balancer Config](#haproxy-example-load-balancer-config)
- [DNS Lookups](#dns-lookups)
- [Verifying that the API is Reachable](#verifying-that-the-api-is-reachable)
- [Verifying that Apps Reachable](#verifying-that-apps-reachable)
- [Reconfiguring cloud provider](#reconfiguring-cloud-provider)
- [Modifying cloud provider options](#modifying-cloud-provider-options)
- [Refreshing a CA Certificate](#refreshing-a-ca-certificate)
6 changes: 3 additions & 3 deletions docs/user/openstack/install_upi.md
@@ -63,7 +63,7 @@ of this method of installation.

## Prerequisites

The file `inventory.yaml` contains the variables most likely to need customisation.
The file `inventory.yaml` contains the variables most likely to need customization.
**NOTE**: some of the default pods (e.g. the `openshift-router`) require at least two nodes so that is the effective minimum.

The requirements for UPI are broadly similar to the [ones for OpenStack IPI][ipi-reqs]:
@@ -580,7 +580,7 @@ Possible choices include:
* Swift (see Example 1 below);
* Glance (see Example 2 below);
* Amazon S3;
* Internal web server inside your organisation;
* Internal web server inside your organization;
* A throwaway Nova server in `$INFRA_ID-nodes` hosting a static web server exposing the file.

In this guide, we will assume the file is at the following URL:
@@ -932,7 +932,7 @@ csr-lrtlk 15m system:serviceaccount:openshift-machine-config-operator:node-
csr-wkm94 16m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
```

You should inspect each pending CSR and verify that it comes from a node you recognise:
You should inspect each pending CSR and verify that it comes from a node you recognize:

```sh
$ oc describe csr csr-88jp8
2 changes: 1 addition & 1 deletion docs/user/openstack/privileges.md
@@ -1,6 +1,6 @@
# Required Privileges

In order to succesfully deploy an OpenShift cluster on OpenStack, the user passed to the installer needs a particular set of permissions in a given project. Our recommendation is to create a user in the project that you intend to install your cluster onto with the role *member*. In the event that you want to customize the permissions for a more restricted install, the following use cases can accomodate them.
In order to successfully deploy an OpenShift cluster on OpenStack, the user passed to the installer needs a particular set of permissions in a given project. Our recommendation is to create a user in the project that you intend to install your cluster onto with the role *member*. In the event that you want to customize the permissions for a more restricted install, the following use cases can accommodate them.

## Bring Your Own Networks

6 changes: 3 additions & 3 deletions docs/user/ovirt/install_upi.md
@@ -160,7 +160,7 @@ the URL related to the `OpenStack` qcow2 image type, like in the example below
https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/4.6.0-0.nightly-2020-07-16-122837/rhcos-4.6.0-0.nightly-2020-07-16-122837-x86_64-openstack.x86_64.qcow2.gz
```

The version of the image should be choosen according to the OpenShift version you're about to install (in general less than or equal to the OCP
The version of the image should be chosen according to the OpenShift version you're about to install (in general less than or equal to the OCP
version).
Once you have the URL set in the [inventory.yml](../../../upi/ovirt/inventory.yml) a dedicated Ansible playbook will be in charge to download the `qcow2.gz` file, uncompress it
in a specified folder and use it to create oVirt/RHV templates.
@@ -478,7 +478,7 @@ parameters needed to reach the oVirt/RHV engine and use its REST API.
**NOTE:**
Some of the parameters added during the `openshift-install` workflow, in particular the `Internal API virtual IP` and
`Ingress virtual IP`, will not be used because already configured in your infrastructure DNS (see [DNS](#dns) section).
Other paramenters like `oVirt cluster`, `oVirt storage`, `oVirt network`, will be used as specified in the [inventory.yml](../../../upi/ovirt/inventory.yml)
Other parameters like `oVirt cluster`, `oVirt storage`, `oVirt network`, will be used as specified in the [inventory.yml](../../../upi/ovirt/inventory.yml)
and removed from the `install-config.yaml` with the previously mentioned `virtual IPs`, using a script reported in a
[section below](#set-platform-to-none).

@@ -612,7 +612,7 @@ The `infraID` will be used by the UPI Ansible playbooks as prefix for the VMs cr
process avoiding name clashes in case of multiple installations in the same oVirt/RHV cluster.

**Note:** certificates contained into ignition config files expire after 24 hours. You must complete the cluster installation
and keep the cluster running for 24 hours in a non-degradated state to ensure that the first certificate rotation has finished.
and keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.


## Create templates and VMs
8 changes: 4 additions & 4 deletions docs/user/troubleshootingbootstrap.md
@@ -97,7 +97,7 @@ This directory contains all the operators or their operands running on the boots
For each container the directory has two files,

* `<human readable id>.log`, which contains the log of the container.
* `<human readable id>.inspect`, which containts the information about the container like the image, volume mounts, arguments etc.
* `<human readable id>.inspect`, which contains the information about the container like the image, volume mounts, arguments etc.

#### directory: bootstrap/journals

@@ -107,7 +107,7 @@ The journals directory contains the logs for *important* systemd units. These un
* `crio-configure.log` and `crio.log`, these units are responsible for configuring the CRI-O on the bootstrap host and CRI-O daemon respectively.
* `kubelet.log`, the kubelet service is responsible for running the kubelet on the bootstrap host. The kubelet on the bootstrap host is responsible for running the static pods for etcd, bootstrap-kube-controlplane and various other operators in bootstrap mode.
* `approve-csr.log`, the approve-csr unit is responsible for allowing control-plane machines to join OpenShift cluster. This unit performs the job of in-cluster approver while the bootstrapping is in progress.
* `bootkube.log`, the bootkube service is the unit that performs the bootstrapping of OpenShift clusters using all the operators. This service is respnsible for running all the required steps to bootstrap the API and then wait for success.
* `bootkube.log`, the bootkube service is the unit that performs the bootstrapping of OpenShift clusters using all the operators. This service is responsible for running all the required steps to bootstrap the API and then wait for success.

There might also be other services that are important for some platforms like OpenStack, that will have logs in this directory.

@@ -118,7 +118,7 @@ The pods directory contains the information and logs from all the render command
For each container the directory has two files,

* `<human readable id>.log`, which contains the log of the container.
* `<human readable id>.inspect`, which containts the information about the container like the image, volume mounts, arguments etc.
* `<human readable id>.inspect`, which contains the information about the container like the image, volume mounts, arguments etc.

### directory: resources

@@ -216,4 +216,4 @@
3 directories, 0 files
```

The troubleshooting would require the logs of the installer gathering the log bundle, which are easily availble in `.openshift_install.log`.
The troubleshooting would require the logs of the installer gathering the log bundle, which are easily availble in `.openshift_install.log`.
The troubleshooting would require the logs of the installer gathering the log bundle, which are easily available in `.openshift_install.log`.
2 changes: 1 addition & 1 deletion docs/user/vsphere/install_upi.md
@@ -284,7 +284,7 @@ The Ignition config created by the OpenShift Installer cannot be used directly b

The hostname of each control plane and worker machine must be resolvable from all nodes within the cluster.

Preferrably, the hostname and IP address will be set via DHCP.
Preferably, the hostname and IP address will be set via DHCP.

If you need to manually set a hostname and/or configure a static IP address, you can pass a custom networking command-line `ip=` parameter to Afterburn for configuration. In order to do so, set the vApp property `guestinfo.afterburn.initrd.network-kargs` to the `ip` parameter using this format: `ip=<ip_address>::<gateway>:<netmask>:<hostname>:<iface>:<protocol>:<dns_address>`, e.g. `ip=10.0.0.2::10.0.0.2:255.255.255.0:compute-1:ens192:none:8.8.8.8`

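One way to set that property from the command line is with `govc` (a sketch only: the tool choice, VM path, and values are assumptions, not part of this guide):

```sh
# Hypothetical example: attach the Afterburn network kargs to a VM's guestinfo
# so the node boots with a static IP, hostname, and DNS server.
govc vm.change -vm "/dc1/vm/compute-1" \
  -e "guestinfo.afterburn.initrd.network-kargs=ip=10.0.0.2::10.0.0.2:255.255.255.0:compute-1:ens192:none:8.8.8.8"
```
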
2 changes: 1 addition & 1 deletion docs/user/vsphere/vips-dns.md
@@ -1,4 +1,4 @@
# IP Adresses
# IP Addresses

An installer-provisioned vSphere installation requires two static IP addresses:
