diff --git a/README.md b/README.md index 511470a0..4c6bf620 100644 --- a/README.md +++ b/README.md @@ -1,147 +1,26 @@ # Cluster Stack Provider OpenStack [![GitHub Latest Release](https://img.shields.io/github/v/release/SovereignCloudStack/cluster-stack-provider-openstack?logo=github)](https://github.com/SovereignCloudStack/cluster-stack-provider-openstack/releases) -[![Go Report Card](https://goreportcard.com/badge/github.com/sovereignCloudStack/cluster-stack-provider-openstack)](https://goreportcard.com/report/github.com/sovereignCloudStack/cluster-stack-provider-openstack) +[![Go Report Card](https://goreportcard.com/badge/github.com/SovereignCloudStack/cluster-stack-provider-openstack)](https://goreportcard.com/report/github.com/SovereignCloudStack/cluster-stack-provider-openstack) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) -The Cluster Stack Provider OpenStack (CSPO) works with the Cluster Stack Operator (CSO) and Cluster Stacks, enabling the creation of Kubernetes clusters in a Cluster-API-native (CAPI) fashion. +## Overview -The primary goal of the CSPO is to facilitate the import of node images in a manner specific to OpenStack. These images are then used to create Kubernetes workload clusters on top of the OpenStack infrastructure. +Refer to the [overview page](./docs/overview.md) in the `docs` directory. -To gain a comprehensive understanding of the entire concept, we recommend familiarizing yourself with the fundamental [concepts](https://github.com/SovereignCloudStack/cluster-stack-operator/blob/main/docs/concept.md) and [architecture](https://github.com/SovereignCloudStack/cluster-stack-operator/blob/main/docs/architecture/overview.md) outlined in [CSO](https://github.com/SovereignCloudStack/cluster-stack-operator/blob/main/README.md) and [Cluster Stacks](https://github.com/SovereignCloudStack/cluster-stacks/blob/main/README.md). +## Quickstart -# Quickstart Guide +Refer to the [quickstart page](./docs/quickstart.md) in the `docs` directory. -This section guides you through all the necessary steps to create a workload Kubernetes cluster on top of the OpenStack infrastructure. The guide describes a path that utilizes the [clusterctl] CLI tool to manage the lifecycle of a CAPI management cluster and employs [kind] to create a local non-production management cluster. +## Developer Guide -Note that it is a common practice to create a temporary, local [bootstrap cluster](https://cluster-api.sigs.k8s.io/reference/glossary#bootstrap-cluster) which is then used to provision a target [management cluster](https://cluster-api.sigs.k8s.io/reference/glossary#management-cluster) on the selected infrastructure. +Refer to the [developer guide page](./docs/develop.md) to find more information about how to develop this operator. -## Prerequisites +## Documentation -- Install [Docker] and [kind] -- Install [kubectl] -- Install [clusterctl] -- Install [go] # installation of the Go package `envsubst` is required to enable the expansion of variables specified in CSPO and CSO manifests. +Explore the documentation stored in the [docs](./docs) directory or view the rendered version online at . 
-## Initialize the management cluster - -Create the kind cluster: - -```bash -kind create cluster -``` - -Transform the Kubernetes cluster into a management cluster by using `clusterctl init` and bootstrap it with CAPI and Cluster API Provider OpenStack ([CAPO]) components: - -```bash -# Enable Cluster Class CAPI experimental feature -export CLUSTER_TOPOLOGY=true - -# Install CAPI and CAPO components -clusterctl init --infrastructure openstack -``` - -### Create a secret for OpenStack access - -To enable communication between the CSPO and the Cluster API Provider for OpenStack (CAPO) with the OpenStack API, it is necessary to generate a secret containing the access data (clouds.yaml). -Ensure that this secret is located in the identical namespace as the other Custom Resources. - -> [!NOTE] -> The default value of `cloudName` is configured as `openstack`. This setting can be overridden by including the `cloudName` key in the secret. Also, be aware that the name of the secret is expected to be `openstack` unless it is not set differently in OpenStackClusterStackReleaseTemplate in `identityRef.name` field. - -```bash -kubectl create secret generic openstack --from-file=clouds.yaml=path/to/clouds.yaml - -# Patch the created secrets so they are automatically moved to the target cluster later. - -kubectl patch secret openstack -p '{"metadata":{"labels":{"clusterctl.cluster.x-k8s.io/move":""}}}' -``` - -### CSO and CSPO variables preparation - -The CSO and CSPO must be directed to the Cluster Stacks repository housing releases for the OpenStack provider. -Modify and export the following environment variables if you wish to redirect CSO and CSPO to an alternative Git repository - -Be aware that GitHub enforces limitations on the number of API requests per unit of time. To overcome this, -it is recommended to configure a personal access token for authenticated calls. This will significantly increase the rate limit for GitHub API requests. - -```bash -export GIT_PROVIDER_B64=Z2l0aHVi # github -export GIT_ORG_NAME_B64=U292ZXJlaWduQ2xvdWRTdGFjaw== # SovereignCloudStack -export GIT_REPOSITORY_NAME_B64=Y2x1c3Rlci1zdGFja3M= # cluster-stacks -export GIT_ACCESS_TOKEN_B64= -``` - -### CSO and CSPO deployment - -Install the [envsubst] Go package. It is required to enable the expansion of variables specified in CSPO and CSO manifests. - -```bash -GOBIN=/tmp go install github.com/drone/envsubst/v2/cmd/envsubst@latest -``` - -Get the latest CSO release version and apply CSO manifests to the management cluster. - -```bash -# Get the latest CSO release version -CSO_VERSION=$(curl https://api.github.com/repos/SovereignCloudStack/cluster-stack-operator/releases/latest -s | jq .name -r) -# Apply CSO manifests -curl -sSL https://github.com/sovereignCloudStack/cluster-stack-operator/releases/download/${CSO_VERSION}/cso-infrastructure-components.yaml | /tmp/envsubst | kubectl apply -f - -``` - -Get the latest CSPO release version and apply CSPO manifests to the management cluster. 
- -```bash -# Get the latest CSPO release version -CSPO_VERSION=$(curl https://api.github.com/repos/SovereignCloudStack/cluster-stack-provider-openstack/releases/latest -s | jq .name -r) -# Apply CSPO manifests -curl -sSL https://github.com/SovereignCloudStack/cluster-stack-provider-openstack/releases/download/${CSPO_VERSION}/cspo-infrastructure-components.yaml | /tmp/envsubst | kubectl apply -f - -``` - -## Create the workload cluster - -To transfer the credentials stored in the mentioned secret [above](#create-a-secret-for-openstack-access) to the operator, -create an `OpenStackClusterStackReleaseTemplate` object and specify this secret in the `identityRef` field. -The `clouds.yaml` file may contain one or more clouds, so users must specify the desired connection to a specific cloud using the `cloudName` field. -Refer to the [examples/cspotemplate.yaml](./examples/cspotemplate.yaml) file for more details. - -Next, apply this template to the management cluster: - -```bash -kubectl apply -f -``` - -Proceed to apply the `ClusterStack` to the management cluster. For more details, refer to [examples/clusterstack.yaml](./examples/clusterstack.yaml): - -```bash -kubectl apply -f -``` - -Please be patient and wait for the operator to execute the necessary tasks. -If your `ClusterStack` object encounters no errors and `openstacknodeimagereleases` is ready, you can deploy a workload cluster. -This can be done by applying the cluster-template. -Refer to the example of this template in [examples/cluster.yaml](./examples/cluster.yaml): - -```bash -kubectl apply -f -``` - -Utilize a convenient CLI `clusterctl` to investigate the health of the cluster: - -```bash -clusterctl describe cluster -``` - -Once the cluster is provisioned and in good health, you can retrieve its kubeconfig and establish communication with the newly created workload cluster: - -```bash -# Get the workload cluster kubeconfig -clusterctl get kubeconfig > kubeconfig.yaml -# Communicate with the workload cluster -kubectl --kubeconfig kubeconfig.yaml get nodes -``` - -# Compatibility with Cluster Stack Operator +## Compatibility with Cluster Stack Operator | | CSO `v0.1.0-alpha.2` | CSO `v0.1.0-alpha.3` | | ----------------------- | -------------------- | -------------------- | @@ -149,28 +28,14 @@ kubectl --kubeconfig kubeconfig.yaml get nodes | CSPO `v0.1.0-alpha.1` | ✓ | ✓ | | CSPO `v0.1.0-alpha.2` | ✓ | ✓ | -# Development guide - -Refer to the [doc page](./docs/develop.md) to find more information about how to develop this operator. - -# Controllers +## Controllers CSPO consists of two controllers. They should ensure that the desired node images are present in the targeted OpenStack project. -Refer to the documentation for the CSPO controllers: -- [OpenStackClusterStackRelease controller](./docs/openstackclusterstackrelease-controller.md) -- [OpenStackNodeImageRelease controller](./docs/openstacknodeimagerelease-controller.md) +Refer to the documentation for the CSPO [controllers](./docs/controllers.md) in the `docs` directory. 
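+
+If you want to see what these controllers are doing on a management cluster, you can list the custom resources they reconcile, as sketched below. This is a minimal example; the lowercase plural resource names are assumed to follow the usual Kubernetes convention for the CRDs named above:
+
+```bash
+# List CSPO custom resources in all namespaces (resource names assumed)
+kubectl get openstackclusterstackreleases,openstacknodeimagereleases -A
+```
+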
-# API Reference
+## API Reference
 
 CSPO currently exposes the following APIs:
+
 - the CSPO Custom Resource Definitions (CRDs): [documentation](https://doc.crds.dev/github.com/SovereignCloudStack/cluster-stack-provider-openstack)
 - Golang APIs: tbd
-
-
-[Docker]: https://www.docker.com/
-[kind]: https://kind.sigs.k8s.io/
-[kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
-[clusterctl]: https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl
-[CAPO]: https://github.com/kubernetes-sigs/cluster-api-provider-openstack
-[go]: https://go.dev/doc/install
-[envsubst]: https://github.com/drone/envsubst
diff --git a/docs/openstackclusterstackrelease-controller.md b/docs/controllers.md
similarity index 51%
rename from docs/openstackclusterstackrelease-controller.md
rename to docs/controllers.md
index 5388fa0d..95f68469 100644
--- a/docs/openstackclusterstackrelease-controller.md
+++ b/docs/controllers.md
@@ -1,5 +1,7 @@
-# OpenStackClusterStackRelease controller
+# Controllers
+
+## OpenStackClusterStackRelease controller
 
 The OpenStackClusterStackRelease controller’s main responsibilities are:
 
@@ -9,3 +11,14 @@ The OpenStackClusterStackRelease controller’s main responsibilities are:
 - Update the OpenStackClusterStackRelease status to `ready` once all related OpenStackNodeImageReleases are also `ready`
 
 ![OSCSR controller](./images/openstackclusterstackrelease-controller.png "OSCSR controller")
+
+## OpenStackNodeImageRelease controller
+
+The OpenStackNodeImageRelease controller’s main responsibilities are:
+
+- Load the OpenStack Cloud configuration from the Secret referenced in `spec.IdentityRef`
+- Create an Image as defined by `spec.Image` if it does not already exist in the specified OpenStack project
+- Instruct the OpenStack Glance service to import an Image from the provided URL
+- Set the OpenStackNodeImageRelease status to `ready` once the image achieves an Active status
+
+![OSNIR controller](./images/openstacknodeimagerelease-controller.png "OSNIR controller")
diff --git a/docs/develop.md b/docs/develop.md
index 620ccf64..69283d16 100644
--- a/docs/develop.md
+++ b/docs/develop.md
@@ -1,6 +1,6 @@
-# Develop Cluster Stack Provider OpenStack
+# Developer Guide
 
-Developing our operator is quite straightforward. First, you need to install some basic prerequisites:
+Developing the Cluster Stack Provider OpenStack operator is quite straightforward. First, you need to install some basic prerequisites:
 
 - Docker
 - Go
@@ -51,34 +51,41 @@ make delete-bootstrap-cluster
 
 If you have any trouble finding the right command, then you can use `make help` to get a list of all available make targets.
 
-## Toggle between local_mode and remote mode 
-We can retrieve cluster-stacks in two modes. One way is to let the controller fetch it from GitHub which is remote mode and other is we mount the cluster-stacks inside the container at `/tmp/downloads/cluster-stacks` directory.
+## Toggle between local_mode and remote mode
+
+Cluster stacks can be retrieved in two modes: in remote mode, the controller fetches them from GitHub; in local mode, the cluster stacks are mounted inside the container at the `/tmp/downloads/cluster-stacks` directory.
 
 > [!NOTE]
-> Using remote mode is the default behavior. 
+> Using remote mode is the default behavior.
 
 Switching between both modes is relatively simple if you're using Tilt.
 There is a file at the root of the repo `tilt-settings.yaml.example` Make a copy of that file with the name of `tilt-settings.yaml`
+
 ```bash
 cp tilt-settings.yaml.example tilt-settings.yaml
 ```
+
 Now, open the file and set the `local_mode` to `true` to use cluster-stacks in local_mode. It should look the following content wise.
+
 ```yaml
 local_mode: true
 ```
 
 > [!NOTE]
-> In this mode you need to have cluster-stacks present locally. 
+> In this mode you need to have cluster-stacks present locally.
+
+Downloading cluster stacks can be achieved in many ways, but below is a simple way to download them quickly.
 
-Downloading cluster-stacks can be achieved by many ways but below is a simple way to download it quickly. 
 ```bash
 mkdir -p .release/openstack-scs-1-27-v1/
 cd .release/openstack-scs-1-27-v1
 gh release download --repo sovereigncloudstack/cluster-stacks openstack-scs-1-27-v1
 ```
+
 Change the repo and tag as per the requirement. You can also download it directly from browser and move it to `.release` directory. Please make sure the directory structure remains the same otherwise you'll not be able to start the tilt setup. Here's an example of structuring `openstack-scs-1-27-v1` cluster-stack.
+
 ```bash
 $ tree .release/openstack-scs-1-27-v1/
 .release/openstack-scs-1-27-v1/
@@ -90,4 +97,4 @@ $ tree .release/openstack-scs-1-27-v1/
 
 > [!IMPORTANT] There's an alternative way to get clusterstacks using [csmctl](https://github.com/sovereigncloudstack/csmctl). You can follow the README of csmctl for specific instructions and a good quickstart.
 
-You can use `csmctl create` subcommand to create clusterstack locally. You'll need a csmctl.yaml file in the cluster-stack configuration directory. Please read more about creating configuration file for csmctl in the csmctl docs.
\ No newline at end of file
+You can use the `csmctl create` subcommand to create a cluster stack locally. You'll need a `csmctl.yaml` file in the cluster-stack configuration directory. Please read more about creating the configuration file in the csmctl docs.
diff --git a/docs/images/openstackclusterstackrelease-controller.plantuml b/docs/images/openstackclusterstackrelease-controller.plantuml index 19cbb011..a00eb33a 100644 --- a/docs/images/openstackclusterstackrelease-controller.plantuml +++ b/docs/images/openstackclusterstackrelease-controller.plantuml @@ -5,8 +5,8 @@ start; repeat :OpenStackClusterStackRelease controller enqueues a Reconcile call; - :Create GitHub client; if (Release assets have been download into the CSPO container) then (no) + :Create GitHub client; #LightBlue:Download Release assets; #Pink:Return RequeueError; note left: make sure that Release can be accessed diff --git a/docs/images/openstackclusterstackrelease-controller.png b/docs/images/openstackclusterstackrelease-controller.png index ea5ead9f..02402123 100644 Binary files a/docs/images/openstackclusterstackrelease-controller.png and b/docs/images/openstackclusterstackrelease-controller.png differ diff --git a/docs/openstacknodeimagerelease-controller.md b/docs/openstacknodeimagerelease-controller.md deleted file mode 100644 index c49a97b0..00000000 --- a/docs/openstacknodeimagerelease-controller.md +++ /dev/null @@ -1,9 +0,0 @@ -# OpenStackNodeImageRelease controller - -The OpenStackNodeImageRelease controller’s main responsibilities are: -- Load the OpenStack Cloud configuration from the Secret referenced in `spec.IdentityRef` -- Create an Image as defined by `spec.Image` if it does not already exist in the specified OpenStack project -- Instruct the OpenStack Glance service to import an Image from the provided URL -- Set the OpenStackNodeImageRelease status to `ready` once the image achieves an Active status - -![OSNIR controller](./images/openstacknodeimagerelease-controller.png "OSNIR controller") diff --git a/docs/overview.md b/docs/overview.md new file mode 100644 index 00000000..f13c6c91 --- /dev/null +++ b/docs/overview.md @@ -0,0 +1,7 @@ +# Overview + +The Cluster Stack Provider OpenStack (CSPO) works with the Cluster Stack Operator (CSO) and Cluster Stacks, enabling the creation of Kubernetes clusters in a Cluster-API-native (CAPI) fashion. + +The primary goal of the CSPO is to facilitate the import of node images in a manner specific to OpenStack. These images are then used to create Kubernetes workload clusters on top of the OpenStack infrastructure. + +To gain a comprehensive understanding of the entire concept, we recommend familiarizing yourself with the fundamental [concepts](https://github.com/SovereignCloudStack/cluster-stack-operator/blob/main/docs/concept.md) and [architecture](https://github.com/SovereignCloudStack/cluster-stack-operator/blob/main/docs/architecture/overview.md) outlined in [CSO](https://github.com/SovereignCloudStack/cluster-stack-operator/blob/main/README.md) and [Cluster Stacks](https://github.com/SovereignCloudStack/cluster-stacks/blob/main/README.md). diff --git a/docs/quickstart.md b/docs/quickstart.md new file mode 100644 index 00000000..9cdc1313 --- /dev/null +++ b/docs/quickstart.md @@ -0,0 +1,141 @@ +# Quickstart + +This section guides you through all the necessary steps to create a workload Kubernetes cluster on top of the OpenStack infrastructure. The guide describes a path that utilizes the [clusterctl] CLI tool to manage the lifecycle of a [CAPI] management cluster and employs [kind] to create a local non-production management cluster. 
+
+Note that it is a common practice to create a temporary, local [bootstrap cluster](https://cluster-api.sigs.k8s.io/reference/glossary#bootstrap-cluster) which is then used to provision a target [management cluster](https://cluster-api.sigs.k8s.io/reference/glossary#management-cluster) on the selected infrastructure.
+
+## Prerequisites
+
+- Install [Docker] and [kind]
+- Install [kubectl]
+- Install [clusterctl]
+- Install [jq]
+- Install [go] # required to install the Go package `envsubst`, which expands the variables specified in the CSPO and CSO manifests
+
+## Initialize the management cluster
+
+Create the kind cluster:
+
+```bash
+kind create cluster
+```
+
+Transform the Kubernetes cluster into a management cluster by using `clusterctl init` and bootstrap it with CAPI and Cluster API Provider OpenStack ([CAPO]) components:
+
+```bash
+# Enable Cluster Class CAPI experimental feature
+export CLUSTER_TOPOLOGY=true
+
+# Install CAPI and CAPO components
+clusterctl init --infrastructure openstack
+```
+
+### Create a secret for OpenStack access
+
+To enable the CSPO and the Cluster API Provider OpenStack (CAPO) to communicate with the OpenStack API, it is necessary to create a secret containing the access data (`clouds.yaml`).
+Ensure that this secret is located in the same namespace as the other Custom Resources.
+
+> [!NOTE]
+> The default value of `cloudName` is configured as `openstack`. This setting can be overridden by including the `cloudName` key in the secret. Also, be aware that the name of the secret is expected to be `openstack` unless it is set differently in the `identityRef.name` field of the OpenStackClusterStackReleaseTemplate.
+
+```bash
+kubectl create secret generic openstack --from-file=clouds.yaml=path/to/clouds.yaml
+
+# Label the created secret so it is automatically moved to the target cluster later.
+
+kubectl label secret openstack clusterctl.cluster.x-k8s.io/move=
+```
+
+### CSO and CSPO variables preparation
+
+The CSO and CSPO must be directed to the Cluster Stacks repository housing releases for the OpenStack provider.
+Modify and export the following environment variables if you wish to redirect CSO and CSPO to an alternative Git repository.
+
+Be aware that GitHub enforces limitations on the number of API requests per unit of time. To overcome this,
+it is recommended to configure a personal access token for authenticated calls. This will significantly increase the rate limit for GitHub API requests.
+
+```bash
+export GIT_PROVIDER_B64=Z2l0aHVi # github
+export GIT_ORG_NAME_B64=U292ZXJlaWduQ2xvdWRTdGFjaw== # SovereignCloudStack
+export GIT_REPOSITORY_NAME_B64=Y2x1c3Rlci1zdGFja3M= # cluster-stacks
+export GIT_ACCESS_TOKEN_B64=
+```
+
+### CSO and CSPO deployment
+
+Install the [envsubst] Go package. It is required to enable the expansion of variables specified in the CSPO and CSO manifests.
+
+```bash
+GOBIN=/tmp go install github.com/drone/envsubst/v2/cmd/envsubst@latest
+```
+
+Get the latest CSO release version and apply the CSO manifests to the management cluster.
+
+```bash
+# Get the latest CSO release version
+CSO_VERSION=$(curl https://api.github.com/repos/SovereignCloudStack/cluster-stack-operator/releases/latest -s | jq .name -r)
+# Apply CSO manifests
+curl -sSL https://github.com/SovereignCloudStack/cluster-stack-operator/releases/download/${CSO_VERSION}/cso-infrastructure-components.yaml | /tmp/envsubst | kubectl apply -f -
+```
+
+Get the latest CSPO release version and apply the CSPO manifests to the management cluster.
+
+```bash
+# Get the latest CSPO release version
+CSPO_VERSION=$(curl https://api.github.com/repos/SovereignCloudStack/cluster-stack-provider-openstack/releases/latest -s | jq .name -r)
+# Apply CSPO manifests
+curl -sSL https://github.com/SovereignCloudStack/cluster-stack-provider-openstack/releases/download/${CSPO_VERSION}/cspo-infrastructure-components.yaml | /tmp/envsubst | kubectl apply -f -
+```
+
+## Create the workload cluster
+
+To pass the credentials stored in the secret mentioned [above](#create-a-secret-for-openstack-access) to the operator,
+create an `OpenStackClusterStackReleaseTemplate` object.
+Refer to the `examples/cspotemplate.yaml` file for more details.
+
+Next, apply this template to the management cluster:
+
+```bash
+kubectl apply -f 
+```
+
+Proceed to apply the `ClusterStack` to the management cluster. For more details, refer to `examples/clusterstack.yaml`:
+
+```bash
+kubectl apply -f 
+```
+
+Please be patient and wait for the operator to execute the necessary tasks.
+Once your `ClusterStack` object reports no errors and usable versions, and the `openstackclusterstackrelease` and `openstacknodeimagereleases` objects are ready, you can deploy a workload cluster.
+This can be done by applying the cluster-template.
+Refer to the example of this template in `examples/cluster.yaml`:
+
+```bash
+kubectl apply -f 
+```
+
+Use the `clusterctl` CLI to investigate the health of the cluster:
+
+```bash
+clusterctl describe cluster 
+```
+
+Once the cluster is provisioned and in good health, you can retrieve its kubeconfig and establish communication with the newly created workload cluster:
+
+```bash
+# Get the workload cluster kubeconfig
+clusterctl get kubeconfig > kubeconfig.yaml
+# Communicate with the workload cluster
+kubectl --kubeconfig kubeconfig.yaml get nodes
+```
+
+
+[Docker]: https://www.docker.com/
+[kind]: https://kind.sigs.k8s.io/
+[kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
+[clusterctl]: https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl
+[CAPO]: https://github.com/kubernetes-sigs/cluster-api-provider-openstack
+[CAPI]: https://cluster-api.sigs.k8s.io/
+[go]: https://go.dev/doc/install
+[envsubst]: https://github.com/drone/envsubst
+[jq]: https://jqlang.github.io/jq/download/