Commit 253f144

fix doc links

simonrondelez committed Jun 27, 2024
1 parent e6c9afa commit 253f144

Showing 13 changed files with 44 additions and 54 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/publish.yml
@@ -18,7 +18,7 @@ jobs:
- name: Install dependencies
run: pip install -r requirements.txt
- name: Build site
-       run: mkdocs build
+       run: mkdocs build --strict
- name: Upload to GitHub Pages
uses: actions/upload-pages-artifact@v2
with:
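The broken relative links fixed in this commit are exactly what `mkdocs build --strict` (enabled above) turns into build failures. As a rough illustration of the idea, here's a small hypothetical standalone sketch (not mkdocs' actual implementation) that finds relative Markdown links whose targets don't exist:

```python
import os
import re

# Capture link targets from Markdown, dropping any #anchor part.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)")

def broken_links(md_text, base_dir):
    """Return link targets in md_text that don't resolve to an existing file."""
    broken = []
    for target in LINK_RE.findall(md_text):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # only relative file links are checked here
        # Treat leading-slash links as relative to the docs root (base_dir).
        path = os.path.normpath(os.path.join(base_dir, target.lstrip("/")))
        if not os.path.exists(path):
            broken.append(target)
    return broken
```

For instance, `./vault.md` referenced from `Concourse/README.md` doesn't resolve, which is why this commit rewrites it to `../kubernetes/vault.md`.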
8 changes: 4 additions & 4 deletions Concourse/README.md
@@ -15,7 +15,7 @@ You'll find your Concourse address in the `README.md` file of your GitHub repo.
- [Get the newest fly](#get-the-newest-fly)
- [Removing a stalled worker](#removing-a-stalled-worker)
- [Secrets](#secrets)
-   - [Vault - Concourse integration](#vault---concourse-integration)
+   - [Vault integration](#vault-integration)
- [Examples](#examples)
- [Limitations](#limitations)
- [Plain secrets.yaml file](#plain-secretsyaml-file)
@@ -78,7 +78,7 @@ Either download the newest binary from your Concourse web UI or execute `fly -t

You'll sometimes want to set secrets and other sensitive data in your pipelines, like AWS or DockerHub credentials. There are a couple of options to do that: if you have a Vault setup you can store your sensitive data in Vault and use it in Concourse via the [credential management support](https://concourse-ci.org/creds.html), or you can use pipeline parameters to keep your sensitive data out of the pipeline definitions.

- ### Vault - Concourse integration
+ ### Vault integration

The Vault integration solution is preferred, as there are fewer moving parts, secrets are more secure than in a plain `yaml` file, and it's a much more robust system overall. But of course it can only be used if you already have a running Vault setup.

@@ -116,7 +116,7 @@ At the moment, the Vault integration in Concourse is a bit limited (more info ca

This integration is still useful to store static secrets like passwords, tokens or Git keys, so we don't have to provide them as plaintext via a `secrets.yaml` file, which is also quite hard to distribute to the team. But be aware that for the moment this integration won't allow you to dynamically provision AWS credentials, for example.

- You can find more detailed information in the [Vault specific documentation](./vault.md).
+ You can find more detailed information in the [Vault specific documentation](../kubernetes/vault.md).
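With the Vault integration enabled, pipelines reference secrets through Concourse's `((var))` syntax, so the values never appear in the pipeline definition itself. A hypothetical sketch (the secret name and fields are examples; the Vault path they resolve to depends on how the integration is mounted):

```yaml
resources:
  - name: app-image
    type: registry-image
    source:
      repository: example/app                  # hypothetical repository
      username: ((dockerhub-creds.username))   # resolved from Vault at runtime
      password: ((dockerhub-creds.password))
```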

### Plain secrets.yaml file

@@ -504,4 +504,4 @@ jobs:

## Feature environments

- Check out the [dedicated page on Feature Environments](/Concourse/feature_environments.md) for an example on how you can implement this with Concourse.
+ Check out the [dedicated page on Feature Environments](./feature_environments.md) for an example on how you can implement this with Concourse.
2 changes: 1 addition & 1 deletion backups.md
@@ -9,7 +9,7 @@ In general we use the following retention periods by default. However most of th
- AWS-provided hourly snapshot, with a retention of **14 days**. Not configurable
- Skyscrapers-provided snapshot every 6 hours, with a **14 day retention**. Snapshots are stored on an encrypted S3 bucket. Configurable
- AWS ElastiCache for Redis: by default snapshots are **disabled**, but this is configurable
- - Kubernetes state and Statefulset volumes are backed up daily through Velero with a default **14 day** retention). Further documentation: [/kubernets/backups.md](/kubernetes/backups.md). Configurable
+ - Kubernetes state and Statefulset volumes are backed up daily through Velero with a default **14 day** retention. Further documentation: [/kubernetes/backups.md](kubernetes/backups.md). Configurable
- Kubernetes state: all objects are backed up to S3 (encrypted)
- EBS volumes: uses AWS snapshots, encrypted if the original volume is encrypted (so default **yes** if the `gp2-encrypted` Storage Class is used for your PVCs)
- MongoDB: daily backup with **14 day** retention. Configurable
14 changes: 7 additions & 7 deletions kubernetes/README.md
@@ -11,7 +11,7 @@ If you are new to Kubernetes, check the [getting started page](getting_started.m
- [Walk-through of the Skyscrapers' Kubernetes cluster](#walk-through-of-the-skyscrapers-kubernetes-cluster)
- [Requirements](#requirements)
- [Authentication](#authentication)
- - [Deploying applications \& services on Kubernetes: the Helm Package Manager](#deploying-applications--services-on-kubernetes-the-helm-package-manager)
+ - [Deploying on Kubernetes with the Helm Package Manager](#deploying-on-kubernetes-with-the-helm-package-manager)
- [Ingress](#ingress)
- [HTTP traffic (ports 80 and 443)](#http-traffic-ports-80-and-443)
- [Other traffic](#other-traffic)
@@ -31,11 +31,11 @@ If you are new to Kubernetes, check the [getting started page](getting_started.m
- [Local NVMe Instance Storage](#local-nvme-instance-storage)
- [Monitoring](#monitoring)
- [Logs](#logs)
- - [Cluster updates \& rollouts](#cluster-updates--rollouts)
+ - [Cluster updates and rollouts](#cluster-updates-and-rollouts)
- [Cronjobs](#cronjobs)
- [Cronjob Monitoring](#cronjob-monitoring)
- [Clean up](#clean-up)
- - [Accessing cluster resources / services locally](#accessing-cluster-resources--services-locally)
+ - [Accessing cluster resources and services locally](#accessing-cluster-resources-and-services-locally)

## Requirements

@@ -62,7 +62,7 @@ aws eks update-kubeconfig --name <cluster_name> --alias <my_alias> [--role-arn <
aws eks update-kubeconfig --name production-eks-example-com --alias production --role-arn arn:aws:iam::123456789012:role/developer
```

- ## Deploying applications & services on Kubernetes: the Helm Package Manager
+ ## Deploying on Kubernetes with the Helm Package Manager

After rolling out a Kubernetes cluster, it could be tempting to start executing numerous `kubectl create` or `kubectl apply` commands to get things deployed on the cluster.

@@ -144,7 +144,7 @@ If the user has previously logged in through DEX, the flow is fully transparent
### Dynamic, whitelabel-style Ingress to your application
- If your application allows for end-customers to use their custom domain, you can let your application interface directly with the K8s API to manage `Ingress` objects. For more info, check our [separate page on the subject](/kubernetes/create_ingress_via_api.md).
+ If your application allows for end-customers to use their custom domain, you can let your application interface directly with the K8s API to manage `Ingress` objects. For more info, check our [separate page on the subject](./create_ingress_via_api.md).

### Enabling and using the ModSecurity WAF

@@ -546,7 +546,7 @@ Cluster and application monitoring is a quite extensive topic by itself, so ther

Cluster and application logging is a quite extensive topic by itself, so there's a specific document for it [here](./logging.md).

- ## Cluster updates & rollouts
+ ## Cluster updates and rollouts

As part of our responsibilities, we continuously roll out improvements (upgrades, updates, bug fixes and new features). Depending on the type of improvement, the impact on platform usage and applications varies anywhere between nothing and (a small) downtime. Below is an overview of the most common types of improvements. More exceptional types will be handled separately.

@@ -584,7 +584,7 @@ failedJobsHistoryLimit: 3

This will clean up all jobs except the last 3, both for successful and failed jobs.
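In a `CronJob` manifest, both history limits sit directly under `spec`; a minimal sketch (the name, schedule, and container are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-job              # hypothetical name
spec:
  schedule: "0 3 * * *"
  successfulJobsHistoryLimit: 3  # keep only the last 3 successful jobs
  failedJobsHistoryLimit: 3      # keep only the last 3 failed jobs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: job
              image: busybox
              command: ["sh", "-c", "echo done"]
```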

- ## Accessing cluster resources / services locally
+ ## Accessing cluster resources and services locally

One of the main challenges developers and operators face when using Kubernetes is communication between cluster services and those running on a local workstation. This is sometimes needed to test new versions of a service, for example, or to access a cluster service that's not exposed to the internet.

2 changes: 1 addition & 1 deletion kubernetes/getting_started.md
@@ -32,7 +32,7 @@ The following are some important configuration concepts that your application wi

Now that you know about all of these kubernetes resources, it's time to tie everything together with Helm, the package manager.

- - You can find why we recommend using Helm to deploy your application [here](README.md#deploying-applications--services-on-kubernetes-the-helm-package-manager)
+ - You can find why we recommend using Helm to deploy your application [here](README.md#deploying-on-kubernetes-with-the-helm-package-manager)

- Tutorial for creating a helm chart: <https://helm.sh/docs/chart_template_guide/getting_started/>

2 changes: 0 additions & 2 deletions kubernetes/monitoring.md
@@ -311,8 +311,6 @@ Here is an example for **k8s**:

### Ruby

- #### Native
-
Prometheus has a [native client library](https://github.com/prometheus/client_ruby) for Ruby. This is a really good library when you run your Ruby application as a single process. Unfortunately, a lot of applications use Unicorn or another multi-process setup. When running multi-process Ruby, it's best to use GitLab's [fork](https://gitlab.com/gitlab-org/prometheus-client-mmap).

You have to integrate this library in your application and expose it as an endpoint. Once that is done, you can add a `ServiceMonitor` to scrape it.
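A `ServiceMonitor` for such an endpoint could look roughly like this (names, labels, and the port are hypothetical; the `selector` must match the labels of the Service exposing the metrics port):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-ruby-app              # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-ruby-app           # must match the Service's labels
  endpoints:
    - port: metrics              # named port on the Service serving /metrics
      path: /metrics
```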
7 changes: 5 additions & 2 deletions kubernetes/openvpn.md
@@ -8,7 +8,8 @@
- [Setup OpenVPN for Linux (tested on Ubuntu 20.04 LTS)](#setup-openvpn-for-linux-tested-on-ubuntu-2004-lts)
- [GUI (NetworkManager)](#gui-networkmanager)
- [CLI](#cli)
-   - [Older Ubuntu versions / Troubleshooting DNS resolving](#older-ubuntu-versions--troubleshooting-dns-resolving)
+   - [Older Ubuntu versions](#older-ubuntu-versions)
+   - [Troubleshooting DNS resolving](#troubleshooting-dns-resolving)
- [Known issues](#known-issues)
- [Subnet overlap with eg docker-compose](#subnet-overlap-with-eg-docker-compose)

@@ -113,7 +114,9 @@ Recent Ubuntu releases use `systemd-resolved` for DNS which by default [won't ho
date: Wed, 15 Jul 2020 13:05:16 GMT
```

- ### Older Ubuntu versions / Troubleshooting DNS resolving
+ ### Older Ubuntu versions
+
+ #### Troubleshooting DNS resolving

If DNS resolving for the resources behind the VPN is still not working correctly, try the following steps. After each step, restart the OpenVPN connection and test if DNS works. You might not need all or any of these steps, depending on the state of your current system configuration.

8 changes: 1 addition & 7 deletions kubernetes/pods.md
@@ -16,9 +16,7 @@ Kubernetes, an open-source container orchestration platform, revolutionizes the
- [Scheduling and reliability](#scheduling-and-reliability)
- [Autoscaling](#autoscaling)
- [Horizontal Pod Autoscaling](#horizontal-pod-autoscaling)
-     - [Configuration](#configuration)
- [Vertical Pod Autoscaling](#vertical-pod-autoscaling)
-     - [Configuration](#configuration-1)
- [Pod Disruption Budgets](#pod-disruption-budgets)
- [TL;DR](#tldr)
- [Best Practices](#best-practices)
@@ -139,12 +137,10 @@ We provide [KEDA (Kubernetes Event-driven Autoscaling)](https://keda.sh/) for sc
From the upstream project:
- > KEDA is a [Kubernetes](<https://kubernetes.io/)-based> Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.
+ > KEDA is a [Kubernetes](https://kubernetes.io/)-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.
>
> KEDA is a single-purpose and lightweight component that can be added into any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) and can extend functionality without overwriting or duplication. With KEDA you can explicitly map the apps you want to use event-driven scale, with other apps continuing to function. This makes KEDA a flexible and safe option to run alongside any number of any other Kubernetes applications or frameworks.
- ##### Configuration
Autoscaling is configured via objects
called [ScaledObject (for Deployments, StatefulSets and Custom Resources)](https://keda.sh/docs/2.11/concepts/scaling-deployments/) and [ScaledJob (for Jobs)](https://keda.sh/docs/2.11/concepts/scaling-jobs/).
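A `ScaledObject` for scaling a Deployment on AWS SQS queue length could be sketched roughly as follows (all names and the queue URL are hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler            # hypothetical name
spec:
  scaleTargetRef:
    name: worker                 # the Deployment to scale
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/example-queue
        queueLength: "5"         # target messages per replica
        awsRegion: eu-west-1
```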
@@ -204,8 +200,6 @@ And many more options, [check out all scalers](https://keda.sh/docs/2.6/scalers/
It can both down-scale pods that are over-requesting resources and up-scale pods that are under-requesting resources, based on their usage over time.
- ##### Configuration
Autoscaling is configured with a
[Custom Resource Definition object](https://kubernetes.io/docs/concepts/api-extension/custom-resources/)
called [VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go).
2 changes: 1 addition & 1 deletion kubernetes/vault.md
@@ -75,7 +75,7 @@ vault list <secret backend>/
vault read <secret backend>/<path to your secret>
```
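For example, assuming a KV backend mounted at `concourse/` with a team secret underneath (the mount point, team path, and secret name are all hypothetical):

```shell
vault list concourse/main/
vault read concourse/main/dockerhub-creds
```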

- See the [Concourse specific documentation](./concourse.md) for how Vault secrets can be used within Concourse.
+ See the [Concourse specific documentation](../Concourse/README.md) for how Vault secrets can be used within Concourse.

### Writing a secret to the KV secrets engine

1 change: 1 addition & 0 deletions mkdocs.yaml
@@ -35,6 +35,7 @@ plugins:
- social
- tags
- same-dir
+   - callouts
- search:
separator: '[\s\u200b\-_,:!=\[\]()"`/]+|\.(?!\d)|&[lg]t;|(?!\b)(?=[A-Z][a-z])'
extra:
1 change: 1 addition & 0 deletions requirements.txt
@@ -7,3 +7,4 @@ mkdocs-material-extensions==1.3.1
pillow==10.3.0
CairoSVG==2.7.1
mkdocs-same-dir
+ mkdocs-callouts
9 changes: 1 addition & 8 deletions runbook.md
@@ -60,7 +60,6 @@ In addition to the alerts listed on this page, there are other system alerts tha
- [cert-manager alerts](#cert-manager-alerts)
- [Alert Name: CertificateNotReady](#alert-name-certificatenotready)
- [Alert Name: CertificateAboutToExpire](#alert-name-certificateabouttoexpire)
-   - [Alert Name: CertificateAboutToExpire](#alert-name-certificateabouttoexpire-1)
- [Alert Name: AmazonMQCWExporterDown](#alert-name-amazonmqcwexporterdown)
- [Alert Name: AmazonMQMemoryAboveLimit](#alert-name-amazonmqmemoryabovelimit)
- [Alert Name: AmazonMQDiskFreeBelowLimit](#alert-name-amazonmqdiskfreebelowlimit)
@@ -343,13 +342,7 @@ In addition to the alerts listed on this page, there are other system alerts tha

- *Description*: `A cert-manager certificate is about to expire`
- *Severity*: `warning`
- - *Action*: A certificate has less than two weeks to expire and did not get renewed, check the certificate events and the certmanager pod logs to get the reason of the failure.
-
- ### Alert Name: CertificateAboutToExpire
-
- - *Description*: `A cert-manager certificate is expiring`
- - *Severity*: `warning`
- - *Action*: A certificate has less than one week to expire and did not get renewed, check the certificate events and the certmanager pod logs to get the reason of the failure.
+ - *Action*: A certificate has less than x weeks to expire and did not get renewed, check the certificate events and the certmanager pod logs to get the reason of the failure.

### Alert Name: AmazonMQCWExporterDown
