AUTO: Docs repo sync - Scalar Kubernetes #389

Merged 1 commit on Jul 29, 2024
18 changes: 14 additions & 4 deletions docs/scalar-kubernetes/AccessScalarProducts.mdx
@@ -6,20 +6,30 @@ This document explains how to make ScalarDB or ScalarDL deployed in a Kubernetes
* Via a load balancer from outside the Kubernetes cluster.
* From a bastion server by using the `kubectl port-forward` command (for testing purposes only); a port-forward sketch follows this list.

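For the bastion-server route, a minimal port-forward sketch (assuming a release named `scalardb` in the `ns-scalar` namespace and Envoy's gRPC port 60051, as in the example output later in this section):

```console
# Forward local port 60051 to the Envoy service inside the cluster (testing only).
kubectl port-forward -n ns-scalar svc/scalardb-envoy 60051:60051
```
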
The resource name `<HELM_RELEASE_NAME>-envoy` is decided based on the helm release name. You can see the helm release name by running the following command:

```console
helm list -n ns-scalar
```

You should see the following output:

```console
NAME               NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                  APP VERSION
scalardb           ns-scalar   1          2023-02-09 19:31:40.527130674 +0900 JST   deployed   scalardb-2.5.0         3.8.0
scalardl-auditor   ns-scalar   1          2023-02-09 19:32:03.008986045 +0900 JST   deployed   scalardl-audit-2.5.1   3.7.1
scalardl-ledger    ns-scalar   1          2023-02-09 19:31:53.459548418 +0900 JST   deployed   scalardl-4.5.1         3.7.1
```
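
If you only need the bare release names (for example, to construct `<HELM_RELEASE_NAME>-envoy` in a script), `helm list` can print them alone; a small sketch, assuming the same `ns-scalar` namespace:

```console
# -q prints only release names, one per line.
helm list -n ns-scalar -q
```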

You can also see the envoy service name `<HELM_RELEASE_NAME>-envoy` by running the following command:

```console
kubectl get service -n ns-scalar
```

You should see the following output:

```console
NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
scalardb-envoy           LoadBalancer   10.99.245.143   <pending>     60051:31110/TCP   2m2s
scalardb-envoy-metrics   ClusterIP      10.104.56.87    <none>        9001/TCP          2m2s
```
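
Once the `EXTERNAL-IP` moves from `<pending>` to a real address, clients outside the cluster connect to it on the Envoy port. To extract just that address, a jsonpath sketch (assuming the `scalardb-envoy` service above; some cloud providers report a hostname instead of an IP):

```console
# Prints the load balancer IP assigned to the Envoy service, once provisioned.
kubectl get service scalardb-envoy -n ns-scalar -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
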
16 changes: 13 additions & 3 deletions docs/scalar-kubernetes/BackupNoSQL.mdx
@@ -55,10 +55,15 @@ If you use Scalar Helm Charts to deploy ScalarDB or ScalarDL, the `my-svc` and `
```console
_scalardl-auditor-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
```
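
To confirm that such a record actually resolves, you can run a DNS lookup from a temporary pod inside the cluster; a sketch, assuming a release named `scalardl-auditor` in the `ns-scalar` namespace and the `tutum/dnsutils` image for `dig`:

```console
# Spin up a throwaway pod, query the SRV record, and clean the pod up afterward.
kubectl run dns-test -n ns-scalar --rm -it --image=tutum/dnsutils --restart=Never -- \
  dig +short _scalardl-auditor-admin._tcp.scalardl-auditor-headless.ns-scalar.svc.cluster.local SRV
```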

The helm release name decides the headless service name `<helm release name>-headless`. You can see the helm release name by running the following command:

```console
helm list -n ns-scalar
```

You should see the following output:

```console
NAME               NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                  APP VERSION
scalardb           ns-scalar   1          2023-02-09 19:31:40.527130674 +0900 JST   deployed   scalardb-2.5.0         3.8.0
scalardl-auditor   ns-scalar   1          2023-02-09 19:32:03.008986045 +0900 JST   deployed   scalardl-audit-2.5.1   3.7.1
scalardl-ledger    ns-scalar   1          2023-02-09 19:31:53.459548418 +0900 JST   deployed   scalardl-4.5.1         3.7.1
```

You can also see the headless service name `<helm release name>-headless` by running the following command:

```console
kubectl get service -n ns-scalar
```

You should see the following output:

```console
NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
scalardb-envoy           LoadBalancer   10.99.245.143   <pending>     60051:31110/TCP   2m2s
scalardb-envoy-metrics   ClusterIP      10.104.56.87    <none>        9001/TCP          2m2s
```
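
Headless services are the entries whose `CLUSTER-IP` is `None`, so a quick filter sketch (assuming the `<helm release name>-headless` naming convention above):

```console
# List only the headless services created by the Scalar Helm Charts.
kubectl get service -n ns-scalar | grep -- '-headless'
```
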
7 changes: 6 additions & 1 deletion docs/scalar-kubernetes/K8sLogCollectionGuide.mdx
@@ -124,8 +124,13 @@ helm install scalar-logging-loki grafana/loki-stack -n monitoring -f scalar-loki

If the Loki and Promtail pods are deployed properly, you can see that their `STATUS` is `Running` by running the following command. Since the Promtail pods are deployed as a DaemonSet, the number of Promtail pods depends on the number of Kubernetes nodes. In the following example, there are three worker nodes for Scalar products in the Kubernetes cluster.

```console
kubectl get pod -n monitoring
```

You should see the following output:

```console
NAME                                 READY   STATUS    RESTARTS   AGE
scalar-logging-loki-0                1/1     Running   0          35m
scalar-logging-loki-promtail-2fnzn   1/1     Running   0          32m
```
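
Because Promtail runs as a DaemonSet, you can also check the desired and ready pod counts in one line; a sketch, assuming the DaemonSet name `scalar-logging-loki-promtail` derived from the release name (as the pod names above suggest):

```console
# DESIRED and READY should both match the number of nodes Promtail can schedule on.
kubectl get daemonset scalar-logging-loki-promtail -n monitoring
```
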
9 changes: 7 additions & 2 deletions docs/scalar-kubernetes/K8sMonitorGuide.mdx
@@ -52,10 +52,15 @@ Scalar products assume the Prometheus Operator is deployed in the `monitoring` namespace

## Check if the Prometheus Operator is deployed

If the Prometheus Operator pods (including Prometheus, Alertmanager, and Grafana) are deployed properly, you can see that their `STATUS` is `Running` by running the following command:

```console
kubectl get pod -n monitoring
```

You should see the following output:

```console
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-scalar-monitoring-kube-pro-alertmanager-0   2/2     Running   0          55s
prometheus-scalar-monitoring-kube-pro-prometheus-0       2/2     Running   0          55s
```
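
Once everything is `Running`, you can reach the bundled Grafana from a local machine with a port-forward; a sketch, assuming the release is named `scalar-monitoring` so that kube-prometheus-stack exposes a `scalar-monitoring-grafana` service on port 80 (both names are assumptions inferred from the pod names above):

```console
# Browse Grafana at http://localhost:3000 while this command runs.
kubectl port-forward -n monitoring svc/scalar-monitoring-grafana 3000:80
```
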
21 changes: 18 additions & 3 deletions docs/scalar-kubernetes/RegularCheck.mdx
@@ -17,7 +17,12 @@ What to check:
* Pods are evenly distributed across the different nodes (a counting sketch follows the example outputs below)

```console
kubectl get pod -o wide -n <namespace>
```

You should see the following output:

```console
NAME                              READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
scalardb-7876f595bd-2jb28         1/1     Running   0          2m35s   10.244.2.6   k8s-worker2   <none>           <none>
scalardb-7876f595bd-rfvk6         1/1     Running   0          2m35s   10.244.1.8   k8s-worker    <none>           <none>
scalardb-envoy-84c475f77b-vztqr   1/1     Running   0          2m35s   10.244.2.
```

```console
kubectl get pod -n monitoring -o wide
```

You should see the following output:

```console
NAME                                                     READY   STATUS    RESTARTS      AGE   IP           NODE          NOMINATED NODE   READINESS GATES
alertmanager-scalar-monitoring-kube-pro-alertmanager-0   2/2     Running   1 (11m ago)   12m   10.244.2.4   k8s-worker2   <none>           <none>
prometheus-scalar-monitoring-kube-pro-prometheus-0       2/2     Running   0             12m   10.244.1.5   k8s-worker    <none>           <none>
```
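
To eyeball the distribution quickly, you can count the pods scheduled on each node; a small sketch using a custom column for the node name:

```console
# Prints one line per node with the number of pods scheduled on it.
kubectl get pod -n <namespace> -o custom-columns=NODE:.spec.nodeName --no-headers | sort | uniq -c
```
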
Expand All @@ -48,7 +58,12 @@ What to check:
* The `STATUS` of every node is `Ready` (a filter sketch follows the output below)

```console
kubectl get nodes
```

You should see the following output:

```console
NAME                STATUS   ROLES           AGE   VERSION
k8s-control-plane   Ready    control-plane   16m   v1.25.3
k8s-worker          Ready    <none>          15m   v1.25.3
```
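
For scripting this check, a quick filter that prints only nodes whose `STATUS` is not `Ready` (empty output means the check passes):

```console
# The second column of `kubectl get nodes` output is STATUS.
kubectl get nodes --no-headers | awk '$2 != "Ready"'
```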