
docs: update docs for release #280

Open · wants to merge 3 commits into base: main
4 changes: 0 additions & 4 deletions CONTRIBUTING.md
@@ -4,10 +4,6 @@ Please take a second to learn about [initium-platform code repo](https://github.com

We greatly appreciate bug fixes, documentation improvements and new features. However, when contributing a new major feature, it is a good idea to first open an issue, to make sure the feature fits with the goal of the project, so we don't waste your time or ours.

## General mechanics

![Inner workings of make](docs/img/inner-workings/k8s-addons-internals.png)

## How To Contribute

<a id="contributing-how-to"></a>
84 changes: 2 additions & 82 deletions README.md
@@ -16,11 +16,7 @@ Therefore, ArgoCD is the main requirement to run this project on your cluster.

![Quick Start](docs/img/quick-start/k8s-addons-quick-start.png)

If you don't have `argocd` on your cluster, the following command will install it with the required configuration. Make sure that you are using the correct Kubernetes context before running it.

```bash
$ make argocd
```
See the [Initium Quickstart](https://initium.nearform.com/getting-started/quick-start) documentation.

If you already have `argocd` deployed in your cluster (if installed with Helm, the chart name should be argocd), then for the deployment to succeed and all addons to sync, you must verify that the following configuration is part of your ArgoCD configuration: [argocd/values.yaml](https://github.com/nearform/initium-platform/blob/main/addons/argocd/values.yaml#L23).
You can check it by describing the argo-cd ConfigMap:
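One way to inspect it, assuming ArgoCD runs in the conventional `argocd` namespace (adjust if yours differs), is:

```bash
# Inspect ArgoCD's main ConfigMap to compare against argocd/values.yaml
kubectl describe configmap argocd-cm -n argocd
```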
@@ -54,83 +50,7 @@ Below is a matrix of the cloud providers & Kubernetes versions our setup
| Azure | 1.27 |


## Run locally

### Pre-requisites

You need Docker (or a similar solution) installed to run this project.

Here you can find a list of possible candidates:

- [Docker](https://docs.docker.com/engine/install/) (cross-platform, paid solution)
- [Rancher Desktop](https://rancherdesktop.io/) (cross-platform, FOSS)
- [lima](https://github.com/lima-vm/lima) + [nerdctl](https://github.com/containerd/nerdctl) (macOS only)

Remember that to run this solution you also need at least:

- 4 CPU cores
- 8 GB RAM - you may need to increase Docker's memory or swap limits to run all components with 8 GB of RAM.
- 16 GB Disk space

Those numbers are not written in stone so your mileage may vary depending on which components you choose to install.

> **HINT:** To run everything on a Windows machine, it is recommended to use [WSL](https://learn.microsoft.com/en-us/windows/wsl/install) and install Docker inside this subsystem.

### Bootstrap

> **HINT:** If you want to remove the automatically installed dependencies via asdf once you're done with this project, you can run `make asdf_uninstall`.

To continue with the deployment, you need a set of tools installed first. You can either install them manually (see [.tool-versions](.tool-versions)), or you can install them automatically by following these steps:

1. Install [`asdf-vm`](https://asdf-vm.com/)
2. Run the following command: `make asdf_install`

### Deploy

> **HINT:** If you want to know which commands are available you can always run `make help` on this project.

#### CLI

![Inner workings of make](docs/img/inner-workings/k8s-addons-internals.png)
Make sure you've followed the [bootstrap steps](#bootstrap), then:

```bash
$ make ci
```

Once completed, you're ready to interact with the cluster using `kubectl` as usual.

The current Kubernetes version that will run locally in the project is defined in the `.envrc` file, using the variable `K8S_VERSION`. If you want to run a different local k8s version, please change this value.
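For example, overriding it in `.envrc` might look like the sketch below (the version value is illustrative, not this repo's default):

```shell
# .envrc — pick the Kubernetes version for the local cluster (value below is illustrative)
export K8S_VERSION=v1.27.3
```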

GitHub runners use a matrix strategy to run unit and integration tests on three different k8s versions.
Take a look at `.github/workflows/integration.yaml` or `.github/workflows/preview.yaml` for an example.

#### GUI

Make sure you've followed the [bootstrap steps](#bootstrap), then:

1. Deploy the local environment:

```bash
# Deploy the cluster and run tilt visually
$ make
```

You can access the Kind K8s cluster using any Kubernetes client, like kubectl or Lens.<br>
[Accessing Argocd UI](https://argo-cd.readthedocs.io/en/stable/getting_started/#3-access-the-argo-cd-api-server)

2. (Optional) Run the following two resources in Tilt to port-forward ArgoCD and get the default admin password:

```
- argocd-portforward
- argocd-password
```

> **PLEASE NOTE:** The port forwarding sometimes seems to drop, so re-run the tilt resource to get the connection up and running again.

3. (Optional) Test app-of-apps values changes using the override feature of the bootstrap app, following the instructions in the `./manifests/bootstrap/overrides.local.yaml.tmpl` file.

#### Cleanup
### Cleanup

> **IMPORTANT:** Make sure to run this command while tilt is NOT running.

170 changes: 1 addition & 169 deletions docs/ADDONS.md
@@ -1,171 +1,3 @@
# initium-platform addon list

This document is a list of addons, what they are, how to use them and their purpose in our repository. This is going to be updated as the repository grows.

It is important to emphasize that none of the following addons are strictly **required**. That's why most of them can be disabled by adding `excluded: true` to the app-of-apps `values.yaml` file.

## Summary
- ArgoCD
- cert-manager
- Dex
- Istio
- Knative
- kube-prometheus-stack
- Additional Notes

### ArgoCD

ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes.

ArgoCD follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. Kubernetes manifests can be specified in several ways:

- `kustomize` applications
- `helm` charts
- `jsonnet` files
- Plain directory of YAML/json manifests
- Any custom config management tool configured as a config management plugin

ArgoCD automates the deployment of the desired application states in the specified target environments. Application deployments can track updates to branches or tags, or be pinned to a specific version of manifests at a Git commit. See tracking strategies for additional details about the different tracking strategies available.

We use ArgoCD in our repository for managing all the addons that will be installed on the Kubernetes clusters. It is possible to run all the addons on the same `initium-platform` revision, or pass down a specific revision to each addon, using the `app-of-apps/values.yaml` file `targetRevision` field.
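As a sketch, assuming the field layout described above, pinning a single addon to its own revision in `app-of-apps/values.yaml` might look like this (the addon name and revision are placeholders):

```yaml
# illustrative fragment of app-of-apps/values.yaml — names are placeholders
apps:
  istio:
    targetRevision: v0.1.0
```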

More information at [ArgoCD Docs](https://argo-cd.readthedocs.io/en/stable/).

### cert-manager

*cert-manager is disabled by default*

`cert-manager` is a cloud-native certificate management solution designed to work on Kubernetes. It integrates with AWS Certificate Manager, GCP Certificate Manager, CloudFlare, Let's Encrypt, as well as local issuers and other providers to create SSL/TLS certificates. It has been a CNCF member since 2020.

`cert-manager`'s main responsibilities are to issue certificates, ensure they are valid and up to date, and attempt to renew them at a configured time before expiry.

We use `cert-manager` in this repository for managing all the SSL/TLS certificates a Kubernetes cluster might need. It is listed on our addon dictionary because most clusters need working SSL/TLS certificates for their services that are exposed to the internet.

The way `cert-manager` is set up in this repository, getting it to work once installed is just a matter of setting up a ClusterIssuer custom resource that integrates with the desired provider (Let's Encrypt, for example) and configuring secrets and desired domains.
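For illustration, a minimal Let's Encrypt `ClusterIssuer` could look like the sketch below (the email, secret name, and solver choice are placeholders, not this repo's defaults):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com          # placeholder address
    privateKeySecretRef:
      name: letsencrypt-account-key # secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx            # use the ingress class your cluster actually runs
```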

More info at [cert-manager Docs](https://cert-manager.io/docs/).

### dex

Dex is a Federated OpenID Connect Provider, and a Sandbox project at CNCF.

Dex acts as a portal to other identity providers through “connectors.” This lets Dex defer authentication to LDAP servers, SAML providers, or established identity providers like GitHub, Google, and Active Directory. Clients write their authentication logic once to talk to Dex, then Dex handles the protocols for a given backend.

Once the user has dex up and running, the next step is to write applications that use dex to drive authentication. Apps that interact with dex generally fall into one of two categories:

- Apps that request OpenID Connect ID tokens to authenticate users.
- Used for authenticating an end user.
- Must be web based.
- Standard OAuth2 clients. Users show up at a website, and the application wants to authenticate those end users by pulling claims out of the ID token.
- Apps that consume ID tokens from other apps.
- Needs to verify that a client is acting on behalf of a user.
- These consume ID tokens as credentials.
- This lets another service handle OAuth2 flows, then use the ID token retrieved from dex to act on the end user’s behalf with the app.
  - An example of an app that falls into this category is the Kubernetes API server.
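As a sketch of the first category, a standard OAuth2 client can be registered in Dex's configuration via `staticClients` (all identifiers below are placeholders, not values from this repo):

```yaml
# illustrative Dex config fragment — id, secret, and URIs are placeholders
staticClients:
  - id: example-app
    name: Example App
    redirectURIs:
      - http://127.0.0.1:5555/callback
    secret: example-app-secret
```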

More information at [Dex Docs](https://dexidp.io/docs/getting-started/).

### Istio

Istio is an open source service mesh, which is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code.

Istio provides:
- Secure service-to-service communication in a cluster with TLS encryption, strong identity-based authentication and authorization
- Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic
- Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection
- A pluggable policy layer and configuration API supporting access controls, rate limits and quotas
- Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress
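As an illustration of the routing and retry capabilities above, a minimal Istio `VirtualService` might look like this (host and subset names are placeholders):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews          # in-mesh service name (placeholder)
  http:
    - retries:
        attempts: 3    # retry failed requests up to 3 times
        perTryTimeout: 2s
      route:
        - destination:
            host: reviews
            subset: v2 # subset defined in a DestinationRule (placeholder)
```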

More information at [Istio Docs](https://istio.io/latest/).

### Knative

Knative is a platform-agnostic solution for running serverless deployments. It has two main components called `Serving` and `Eventing`, which empower teams working with Kubernetes. They work together to automate and manage tasks and applications.

##### Serving
Knative Serving defines a set of objects as Kubernetes Custom Resource Definitions (CRDs). These resources are used to define and control how your serverless workload behaves on the cluster.

Common use cases for Knative serving are:

- Rapid deployment of serverless containers.
- Autoscaling, including scaling pods down to zero.
- Support for multiple networking layers, such as Contour, Kourier, and Istio, for integration into existing environments.

The primary Knative Serving resources are:
- Services, which automatically manage the whole lifecycle of your workload. They control the creation of other objects to ensure that your app has a route, a configuration, and a new revision for each update of the service.

- Routes, which map a network endpoint to one or more revisions.

- Configurations, which maintain the desired state for your deployment. They provide a clean separation between code and configuration and follow the Twelve-Factor App methodology. Modifying a configuration creates a new revision.

- Revisions, which are point-in-time snapshots of the code and configuration for each modification made to the workload. Revisions are immutable objects and can be retained for as long as useful. Knative Serving Revisions can be automatically scaled up and down according to incoming traffic.
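For illustration, a minimal Knative `Service` exercising the resources above might look like this (the name is a placeholder; the image is the hello-world sample from the Knative docs):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello    # placeholder name
spec:
  template:      # each change to the template creates a new Revision
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```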

##### Eventing
Knative Eventing is a collection of APIs that enable you to use an event-driven architecture with your applications. You can use these APIs to create components that route events from event producers to event consumers, known as sinks, that receive events. Sinks can also be configured to respond to HTTP requests by sending a response event.

Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specifications, which enables creating, parsing, sending, and receiving events in any programming language.

Common use cases of Knative Eventing are:

- Publishing an event without creating a consumer.
- You can send events to a broker as an HTTP POST, and use binding to decouple the destination configuration from your application that produces events.

- Consuming an event without creating a publisher.
- You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST.
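The second use case above — consuming events from a broker via a trigger — can be sketched as follows (names and the event type are placeholders):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: example-trigger
spec:
  broker: default           # broker to subscribe to
  filter:
    attributes:
      type: example.event.type  # only deliver events with this CloudEvents type
  subscriber:
    ref:                    # the sink that receives matching events as HTTP POSTs
      apiVersion: v1
      kind: Service
      name: event-consumer
```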

More information at [Knative Docs](https://knative.dev/docs/).

### kube-prometheus-stack

> **IMPORTANT:** This addon requires >= ArgoCD 2.5.x

`kube-prometheus-stack` is a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

We use `kube-prometheus-stack` as the main observability stack deployed on the Kubernetes cluster. It can also be tweaked with values like Grafana login credentials, and Prometheus rules, as well as ingress configurations.
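For example, a hedged values fragment (keys follow the upstream chart's layout; how it is wired into this repo's `helmValues` may differ):

```yaml
# illustrative kube-prometheus-stack values — all values are placeholders
grafana:
  adminPassword: change-me   # prefer referencing an existing secret in practice
  ingress:
    enabled: true
prometheus:
  prometheusSpec:
    retention: 7d            # how long to keep metrics
```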

More information at [kube-prometheus-stack Docs](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack).

#### ArgoCD < 2.5.x

This addon requires server deployment, which is unavailable until ArgoCD 2.5 (see https://github.com/argoproj/argo-cd/issues/820). Unfortunately, there's no way to deploy it using earlier versions. To disable this addon, you can use the snippet below:

```yaml
apps:
kube-prometheus-stack:
excluded: true
```

### OpenTelemetry
OpenTelemetry is used to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your software’s performance and behavior. OpenTelemetry is generally available across several languages and is suitable for use. Depending on the project requirements, the OpenTelemetry addon can be enabled and disabled via an ENV variable.

##### Collector
The OpenTelemetry Collector offers a vendor-agnostic implementation of how to receive, process and export telemetry data. It removes the need to run, operate, and maintain multiple agents/collectors.

##### Operator
The OpenTelemetry Operator is an implementation of a Kubernetes Operator; it manages collectors and the auto-instrumentation of workloads using OpenTelemetry instrumentation libraries.

### Additional Notes

We are constantly evaluating new addons that might become standards in the industry. That's not high priority, though, since our main goal is to keep this repository straight to the point and minimize overhead on the users' clusters.

If you want to contribute to the repo, see [CONTRIBUTING.md](CONTRIBUTING.md).

# Override values

You can override values on the addons by modifying the `app-of-apps.yaml` manifest.
Just define a `helmValues` key on the addons you want to customize, e.g.:

```yaml
helm:
  values: |
    repoURL: https://github.com/nearform/initium-platform.git
    subChartsRevision: v0.0.1
    apps:
      dex:
        helmValues:
          dex-source:
            fullnameOverride: dexy
```

We are using `dex-source` since that is the [alias](/addons/dex/Chart.yaml#L8) that we used for the dependency chart.
Each addon has its own alias for the dependency chart; you can find it in the specific addon's `Chart.yaml` file in the [/addons](/addons) folder.
See [Initium Addons](https://initium.nearform.com/introduction/platform/addons).
Binary file modified docs/img/inner-workings/k8s-addons-internals.png