diff --git a/docs/scalar-kubernetes/AccessScalarProducts.mdx b/docs/scalar-kubernetes/AccessScalarProducts.mdx
index c161fe0f..72a716c8 100644
--- a/docs/scalar-kubernetes/AccessScalarProducts.mdx
+++ b/docs/scalar-kubernetes/AccessScalarProducts.mdx
@@ -6,20 +6,30 @@ This document explains how to make ScalarDB or ScalarDL deployed in a Kubernetes
 * Via a load balancer from outside the Kubernetes cluster.
 * From a bastion server by using the `kubectl port-forward` command (for testing purposes only).
 
-The resource name `<helm release name>-envoy` is decided based on the helm release name. You can see the helm release name by running the `helm list` command.
+The resource name `<helm release name>-envoy` is decided based on the helm release name. You can see the helm release name by running the following command:
+
+```console
+helm list -n ns-scalar
+```
+
+You should see the following output:
 
 ```console
-$ helm list -n ns-scalar
 NAME               NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                  APP VERSION
 scalardb           ns-scalar   1          2023-02-09 19:31:40.527130674 +0900 JST   deployed   scalardb-2.5.0         3.8.0
 scalardl-auditor   ns-scalar   1          2023-02-09 19:32:03.008986045 +0900 JST   deployed   scalardl-audit-2.5.1   3.7.1
 scalardl-ledger    ns-scalar   1          2023-02-09 19:31:53.459548418 +0900 JST   deployed   scalardl-4.5.1         3.7.1
 ```
 
-You can also see the envoy service name `<helm release name>-envoy` by running the `kubectl get service` command.
+You can also see the envoy service name `<helm release name>-envoy` by running the following command:
+
+```console
+kubectl get service -n ns-scalar
+```
+
+You should see the following output:
 
 ```console
-$ kubectl get service -n ns-scalar
 NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
 scalardb-envoy           LoadBalancer   10.99.245.143   <pending>     60051:31110/TCP   2m2s
 scalardb-envoy-metrics   ClusterIP      10.104.56.87    <none>        9001/TCP          2m2s
diff --git a/docs/scalar-kubernetes/BackupNoSQL.mdx b/docs/scalar-kubernetes/BackupNoSQL.mdx
index 7dad158c..87372f86 100644
--- a/docs/scalar-kubernetes/BackupNoSQL.mdx
+++ b/docs/scalar-kubernetes/BackupNoSQL.mdx
@@ -55,10 +55,15 @@ If you use Scalar Helm Charts to deploy ScalarDB or ScalarDL, the `my-svc` and `
 _scalardl-auditor-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
 ```
 
-The helm release name decides the headless service name `<helm release name>-headless`. You can see the helm release name by running the `helm list` command.
+The helm release name decides the headless service name `<helm release name>-headless`. You can see the helm release name by running the following command:
+
+```console
+helm list -n ns-scalar
+```
+
+You should see the following output:
 
 ```console
-$ helm list -n ns-scalar
 NAME               NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                  APP VERSION
 scalardb           ns-scalar   1          2023-02-09 19:31:40.527130674 +0900 JST   deployed   scalardb-2.5.0         3.8.0
 scalardl-auditor   ns-scalar   1          2023-02-09 19:32:03.008986045 +0900 JST   deployed   scalardl-audit-2.5.1   3.7.1
@@ -68,7 +73,12 @@ scalardl-ledger ns-scalar 1 2023-02-09 19:31:53.4595
 You can also see the headless service name `<helm release name>-headless` by running the `kubectl get service` command.
 
 ```console
-$ kubectl get service -n ns-scalar
+kubectl get service -n ns-scalar
+```
+
+You should see the following output:
+
+```console
 NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
 scalardb-envoy           LoadBalancer   10.99.245.143   <pending>     60051:31110/TCP   2m2s
 scalardb-envoy-metrics   ClusterIP      10.104.56.87    <none>        9001/TCP          2m2s
diff --git a/docs/scalar-kubernetes/K8sLogCollectionGuide.mdx b/docs/scalar-kubernetes/K8sLogCollectionGuide.mdx
index 4ec3b47c..0b05553a 100644
--- a/docs/scalar-kubernetes/K8sLogCollectionGuide.mdx
+++ b/docs/scalar-kubernetes/K8sLogCollectionGuide.mdx
@@ -124,8 +124,13 @@ helm install scalar-logging-loki grafana/loki-stack -n monitoring -f scalar-loki
 
 If the Loki and Promtail pods are deployed properly, you can see the `STATUS` is `Running` using the `kubectl get pod -n monitoring` command. Since promtail pods are deployed as DaemonSet, the number of promtail pods depends on the number of Kubernetes nodes. In the following example, there are three worker nodes for Scalar products in the Kubernetes cluster.
 
+```console
+kubectl get pod -n monitoring
 ```
-$ kubectl get pod -n monitoring
+
+You should see the following output:
+
+```console
 NAME                                 READY   STATUS    RESTARTS   AGE
 scalar-logging-loki-0                1/1     Running   0          35m
 scalar-logging-loki-promtail-2fnzn   1/1     Running   0          32m
diff --git a/docs/scalar-kubernetes/K8sMonitorGuide.mdx b/docs/scalar-kubernetes/K8sMonitorGuide.mdx
index 895db1b3..b9ed8e9c 100644
--- a/docs/scalar-kubernetes/K8sMonitorGuide.mdx
+++ b/docs/scalar-kubernetes/K8sMonitorGuide.mdx
@@ -52,10 +52,15 @@ Scalar products assume the Prometheus Operator is deployed in the `monitoring` n
 
 ## Check if the Prometheus Operator is deployed
 
-If the Prometheus Operator (includes Prometheus, Alertmanager, and Grafana) pods are deployed properly, you can see the `STATUS` is `Running` using the `kubectl get pod -n monitoring` command.
+If the Prometheus Operator (includes Prometheus, Alertmanager, and Grafana) pods are deployed properly, you can see the `STATUS` is `Running` using the following command:
 
+```console
+kubectl get pod -n monitoring
 ```
-$ kubectl get pod -n monitoring
+
+You should see the following output:
+
+```console
 NAME                                                     READY   STATUS    RESTARTS   AGE
 alertmanager-scalar-monitoring-kube-pro-alertmanager-0   2/2     Running   0          55s
 prometheus-scalar-monitoring-kube-pro-prometheus-0       2/2     Running   0          55s
diff --git a/docs/scalar-kubernetes/RegularCheck.mdx b/docs/scalar-kubernetes/RegularCheck.mdx
index a40e56ad..a2601408 100644
--- a/docs/scalar-kubernetes/RegularCheck.mdx
+++ b/docs/scalar-kubernetes/RegularCheck.mdx
@@ -17,7 +17,12 @@ What to check:
 * Pods are evenly distributed on the different nodes
 
 ```console
-$ kubectl get pod -o wide -n <namespace>
+kubectl get pod -o wide -n <namespace>
+```
+
+You should see the following output:
+
+```console
 NAME                              READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
 scalardb-7876f595bd-2jb28         1/1     Running   0          2m35s   10.244.2.6   k8s-worker2   <none>           <none>
 scalardb-7876f595bd-rfvk6         1/1     Running   0          2m35s   10.244.1.8   k8s-worker    <none>           <none>
@@ -28,7 +33,12 @@ scalardb-envoy-84c475f77b-vztqr 1/1 Running 0 2m35s 10.244.2.
 ```
 
 ```console
-$ kubectl get pod -n monitoring -o wide
+kubectl get pod -n monitoring -o wide
+```
+
+You should see the following output:
+
+```console
 NAME                                                      READY   STATUS    RESTARTS      AGE   IP           NODE          NOMINATED NODE   READINESS GATES
 alertmanager-scalar-monitoring-kube-pro-alertmanager-0    2/2     Running   1 (11m ago)   12m   10.244.2.4   k8s-worker2   <none>           <none>
 prometheus-scalar-monitoring-kube-pro-prometheus-0        2/2     Running   0             12m   10.244.1.5   k8s-worker    <none>           <none>
@@ -48,7 +58,12 @@ What to check:
 * `STATUS` is all `Ready`
 
 ```console
-$ kubectl get nodes
+kubectl get nodes
+```
+
+You should see the following output:
+
+```console
 NAME                STATUS   ROLES           AGE   VERSION
 k8s-control-plane   Ready    control-plane   16m   v1.25.3
 k8s-worker          Ready    <none>          15m   v1.25.3