Add docs (GKE part 8) #834

Merged
merged 1 commit on Feb 23, 2023
5 changes: 5 additions & 0 deletions docs/book/src/topics/gke/cluster-upgrades.md
@@ -0,0 +1,5 @@
# GKE Cluster Upgrades

## Control Plane Upgrade

Upgrading the Kubernetes version of the control plane is supported by the provider. To perform an upgrade, update the `controlPlaneVersion` in the spec of the `GCPManagedControlPlane`. Once the version has changed, the provider will handle the upgrade for you.
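
For example, here is a minimal sketch of triggering an upgrade with `kubectl patch`. The resource name `managed-test-control-plane`, the `default` namespace, and the target version are illustrative assumptions; pick a version that is valid for your GKE release channel.

```bash
# Hypothetical names and version; adjust to your cluster.
kubectl --namespace=default patch gcpmanagedcontrolplane managed-test-control-plane \
  --type merge \
  --patch '{"spec":{"controlPlaneVersion":"v1.25.6"}}'
```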
31 changes: 31 additions & 0 deletions docs/book/src/topics/gke/creating-a-cluster.md
@@ -0,0 +1,31 @@
# Creating a GKE cluster

New "gke" cluster templates are available that you can use with `clusterctl` to create a GKE cluster.

To create a GKE cluster with a managed node group (a.k.a. a managed machine pool):

```bash
clusterctl generate cluster capi-gke-quickstart --flavor gke --worker-machine-count=3 > capi-gke-quickstart.yaml
```
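
The generated manifest can then be applied to the management cluster in the usual Cluster API way, for example:

```bash
kubectl apply -f capi-gke-quickstart.yaml
```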

## Kubeconfig

When creating a GKE cluster, two kubeconfigs are generated and stored as secrets in the management cluster.

### User kubeconfig

This should be used by users who want to connect to the newly created GKE cluster. The name of the secret that contains the kubeconfig will be `[cluster-name]-user-kubeconfig`, where you need to replace **[cluster-name]** with the name of your cluster. The **-user-kubeconfig** suffix indicates that the kubeconfig is for user use.

To get the user kubeconfig for a cluster named `managed-test`, you can run a command similar to:

```bash
kubectl --namespace=default get secret managed-test-user-kubeconfig \
-o jsonpath={.data.value} | base64 --decode \
> managed-test.kubeconfig
```
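
Once extracted, the kubeconfig can be used to talk to the new workload cluster directly, for example:

```bash
kubectl --kubeconfig managed-test.kubeconfig get nodes
```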

### Cluster API (CAPI) kubeconfig

This kubeconfig is used internally by CAPI and shouldn't be used outside of the management cluster. It is used by CAPI to perform operations such as draining a node. The name of the secret that contains the kubeconfig will be `[cluster-name]-kubeconfig`, where you need to replace **[cluster-name]** with the name of your cluster. Note that there is NO `-user` in the name.

The kubeconfig is regenerated every `sync-period`, as the token embedded in the kubeconfig is only valid for a short period of time.
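
To see both secrets for a cluster named `managed-test` (assuming it lives in the `default` namespace):

```bash
# The user-facing and CAPI-internal kubeconfig secrets differ only by the -user- part of the name.
kubectl --namespace=default get secrets managed-test-kubeconfig managed-test-user-kubeconfig
```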
3 changes: 3 additions & 0 deletions docs/book/src/topics/gke/disabling.md
@@ -0,0 +1,3 @@
# Disabling GKE Support

Support for GKE is disabled by default when you use the GCP infrastructure provider.
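
If you want to be explicit about it, here is a sketch assuming the same **EXP_CAPG_GKE** variable described on the enabling page also accepts `false`:

```shell
export EXP_CAPG_GKE=false
clusterctl init --infrastructure gcp
```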
8 changes: 8 additions & 0 deletions docs/book/src/topics/gke/enabling.md
@@ -0,0 +1,8 @@
# Enabling GKE Support

GKE support is enabled via the **GKE** feature flag by setting it to true. This can be done before running `clusterctl init` by setting the **EXP_CAPG_GKE** environment variable:

```shell
export EXP_CAPG_GKE=true
clusterctl init --infrastructure gcp
```
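
To confirm that the flag was picked up, you can inspect the provider controller's feature gates. This is a sketch that assumes the default `capg-system` namespace and `capg-controller-manager` deployment names:

```shell
# Look for --feature-gates=GKE=true in the container args (namespace and deployment names are assumptions).
kubectl --namespace=capg-system get deployment capg-controller-manager \
  -o yaml | grep -- --feature-gates
```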
27 changes: 27 additions & 0 deletions docs/book/src/topics/gke/index.md
@@ -0,0 +1,27 @@
# GKE Support in the GCP Provider

- **Feature status:** Experimental
- **Feature gate (required):** GKE=true

## Overview

The GCP provider supports creating GKE-based clusters. Currently, the following features are supported:

- Provisioning/managing a GCP GKE Cluster
- Upgrading the Kubernetes version of the GKE Cluster
- Creating a managed node pool and attaching it to the GKE cluster

The implementation introduces the following CRD kinds:

- GCPManagedCluster - represents the properties needed to provision and manage the general GCP operating infrastructure for the cluster (i.e. project, networking, IAM)
- GCPManagedControlPlane - specifies the GKE cluster in GCP and is used by the Cluster API GCP managed control plane
- GCPManagedMachinePool - defines the managed node pool for the cluster

Additionally, a new template is available in the templates folder for creating a managed workload cluster.
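
As a quick sanity check after installing the provider with GKE support enabled, you can confirm that the new kinds are registered. The lowercased resource names matched below follow the usual CRD naming convention and are an assumption:

```bash
kubectl api-resources | grep -i gcpmanaged
```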

## SEE ALSO

* [Enabling GKE Support](enabling.md)
* [Disabling GKE Support](disabling.md)
* [Creating a cluster](creating-a-cluster.md)
* [Cluster Upgrades](cluster-upgrades.md)