diff --git a/versioned_docs/version-3.9/scalar-kubernetes/AccessScalarProducts.mdx b/versioned_docs/version-3.9/scalar-kubernetes/AccessScalarProducts.mdx
new file mode 100644
index 00000000..543dff2e
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/AccessScalarProducts.mdx
@@ -0,0 +1,195 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Make ScalarDB or ScalarDL deployed in a Kubernetes cluster environment available from applications
+
+This document explains how to make ScalarDB or ScalarDL deployed in a Kubernetes cluster environment available from applications. To make ScalarDB or ScalarDL available from applications, you can use Scalar Envoy via a Kubernetes service resource named `<helm release name>-envoy`. You can use `<helm release name>-envoy` in several ways, such as:
+
+* Directly from inside the same Kubernetes cluster as ScalarDB or ScalarDL.
+* Via a load balancer from outside the Kubernetes cluster.
+* From a bastion server by using the `kubectl port-forward` command (for testing purposes only).
+
+The resource name `<helm release name>-envoy` is determined based on the Helm release name. You can see the Helm release name by running the following command:
+
+```console
+helm list -n ns-scalar
+```
+
+You should see the following output:
+
+```console
+NAME              NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                 APP VERSION
+scalardb          ns-scalar  1         2023-02-09 19:31:40.527130674 +0900 JST  deployed  scalardb-2.5.0        3.8.0
+scalardl-auditor  ns-scalar  1         2023-02-09 19:32:03.008986045 +0900 JST  deployed  scalardl-audit-2.5.1  3.7.1
+scalardl-ledger   ns-scalar  1         2023-02-09 19:31:53.459548418 +0900 JST  deployed  scalardl-4.5.1        3.7.1
+```
+
+You can also see the Envoy service name `<helm release name>-envoy` by running the following command:
+
+```console
+kubectl get service -n ns-scalar
+```
+
+You should see the following output:
+
+```console
+NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                           AGE
+scalardb-envoy                   LoadBalancer   10.99.245.143    <pending>     60051:31110/TCP                   2m2s
+scalardb-envoy-metrics           ClusterIP      10.104.56.87     <none>        9001/TCP                          2m2s
+scalardb-headless                ClusterIP      None             <none>        60051/TCP                         2m2s
+scalardb-metrics                 ClusterIP      10.111.213.194   <none>        8080/TCP                          2m2s
+scalardl-auditor-envoy           LoadBalancer   10.111.141.43    <pending>     40051:31553/TCP,40052:31171/TCP   99s
+scalardl-auditor-envoy-metrics   ClusterIP      10.104.245.188   <none>        9001/TCP                          99s
+scalardl-auditor-headless        ClusterIP      None             <none>        40051/TCP,40053/TCP,40052/TCP     99s
+scalardl-auditor-metrics         ClusterIP      10.105.119.158   <none>        8080/TCP                          99s
+scalardl-ledger-envoy            LoadBalancer   10.96.239.167    <pending>     50051:32714/TCP,50052:30857/TCP   109s
+scalardl-ledger-envoy-metrics    ClusterIP      10.97.204.18     <none>        9001/TCP                          109s
+scalardl-ledger-headless         ClusterIP      None             <none>        50051/TCP,50053/TCP,50052/TCP     109s
+scalardl-ledger-metrics          ClusterIP      10.104.216.189   <none>        8080/TCP                          109s
+```
+
+## Run application (client) requests to ScalarDB or ScalarDL via service resources directly from inside the same Kubernetes cluster
+
+If you deploy your application (client) in the same Kubernetes cluster as ScalarDB or ScalarDL (for example, if you deploy your application [client] on another node group or pool in the same Kubernetes cluster), the application can access ScalarDB or ScalarDL by using Kubernetes service resources. The format of the service resource name (FQDN) is `<helm release name>-envoy.<namespace>.svc.cluster.local`.
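+
+If you want to confirm that this FQDN is resolvable from inside the cluster before configuring your application, one quick way is to run a temporary pod. The following is a minimal sketch; `busybox` is just an example image, and `<helm release name>` and `<namespace>` are placeholders:
+
+```console
+# Run a throwaway pod that resolves the Envoy service FQDN and is deleted afterward.
+kubectl run dns-check --image=busybox --restart=Never -it --rm -- nslookup <helm release name>-envoy.<namespace>.svc.cluster.local
+```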
+
+The following are examples of ScalarDB and ScalarDL deployments in the `ns-scalar` namespace:
+
+* **ScalarDB Server**
+  ```console
+  scalardb-envoy.ns-scalar.svc.cluster.local
+  ```
+* **ScalarDL Ledger**
+  ```console
+  scalardl-ledger-envoy.ns-scalar.svc.cluster.local
+  ```
+* **ScalarDL Auditor**
+  ```console
+  scalardl-auditor-envoy.ns-scalar.svc.cluster.local
+  ```
+
+When using the Kubernetes service resource, you must set the above FQDN in the properties file for the application (client) as follows:
+
+* **Client properties file for ScalarDB Server**
+  ```properties
+  scalar.db.contact_points=<helm release name>-envoy.<namespace>.svc.cluster.local
+  scalar.db.contact_port=60051
+  scalar.db.storage=grpc
+  scalar.db.transaction_manager=grpc
+  ```
+* **Client properties file for ScalarDL Ledger**
+  ```properties
+  scalar.dl.client.server.host=<helm release name>-envoy.<namespace>.svc.cluster.local
+  scalar.dl.ledger.server.port=50051
+  scalar.dl.ledger.server.privileged_port=50052
+  ```
+* **Client properties file for ScalarDL Ledger with ScalarDL Auditor mode enabled**
+  ```properties
+  # Ledger
+  scalar.dl.client.server.host=<helm release name>-envoy.<namespace>.svc.cluster.local
+  scalar.dl.ledger.server.port=50051
+  scalar.dl.ledger.server.privileged_port=50052
+
+  # Auditor
+  scalar.dl.client.auditor.enabled=true
+  scalar.dl.client.auditor.host=<helm release name>-envoy.<namespace>.svc.cluster.local
+  scalar.dl.auditor.server.port=40051
+  scalar.dl.auditor.server.privileged_port=40052
+  ```
+
+## Run application (client) requests to ScalarDB or ScalarDL via load balancers from outside the Kubernetes cluster
+
+If you deploy your application (client) in an environment outside the Kubernetes cluster for ScalarDB or ScalarDL (for example, if you deploy your application [client] on another Kubernetes cluster, container platform, or server), the application can access ScalarDB or ScalarDL by using a load balancer that each cloud service provides.
+
+You can create a load balancer by setting `envoy.service.type` to `LoadBalancer` in your custom values file. After configuring the custom values file, you can use Scalar Envoy through a Kubernetes service resource by using the load balancer. You can also set the load balancer configurations by using annotations.
+
+For more details on how to configure your custom values file, see [Service configurations](../helm-charts/configure-custom-values-envoy.mdx#service-configurations).
+
+When using a load balancer, you must set the FQDN or IP address of the load balancer in the properties file for the application (client) as follows.
+
+* **Client properties file for ScalarDB Server**
+  ```properties
+  scalar.db.contact_points=<load balancer FQDN or IP address>
+  scalar.db.contact_port=60051
+  scalar.db.storage=grpc
+  scalar.db.transaction_manager=grpc
+  ```
+* **Client properties file for ScalarDL Ledger**
+  ```properties
+  scalar.dl.client.server.host=<load balancer FQDN or IP address>
+  scalar.dl.ledger.server.port=50051
+  scalar.dl.ledger.server.privileged_port=50052
+  ```
+* **Client properties file for ScalarDL Ledger with ScalarDL Auditor mode enabled**
+  ```properties
+  # Ledger
+  scalar.dl.client.server.host=<load balancer FQDN or IP address>
+  scalar.dl.ledger.server.port=50051
+  scalar.dl.ledger.server.privileged_port=50052
+
+  # Auditor
+  scalar.dl.client.auditor.enabled=true
+  scalar.dl.client.auditor.host=<load balancer FQDN or IP address>
+  scalar.dl.auditor.server.port=40051
+  scalar.dl.auditor.server.privileged_port=40052
+  ```
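+
+For example, after the load balancer has been provisioned, you can look up the address to set in these properties from the `EXTERNAL-IP` column of the Envoy service. This is a quick check; `<helm release name>` and `<namespace>` are placeholders:
+
+```console
+# The EXTERNAL-IP column shows the FQDN or IP address that the cloud provider assigned to the load balancer.
+kubectl get service <helm release name>-envoy -n <namespace>
+```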
+
+The concrete implementation of the load balancer and the access method depend on the Kubernetes cluster. If you are using a managed Kubernetes cluster, see the following official documentation based on your cloud service provider:
+
+* **Amazon Elastic Kubernetes Service (EKS)**
+  * [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
+* **Azure Kubernetes Service (AKS)**
+  * [Use a public standard load balancer in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard)
+  * [Use an internal load balancer with Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/internal-lb)
+
+## Run client requests to ScalarDB or ScalarDL from a bastion server (for testing purposes only; not recommended in a production environment)
+
+You can run client requests to ScalarDB or ScalarDL from a bastion server by running the `kubectl port-forward` command. If you create a ScalarDL Auditor mode environment, however, you must run two `kubectl port-forward` commands with different kubeconfig files from one bastion server to access the two Kubernetes clusters.
+
+1. **(ScalarDL Auditor mode only)** In the bastion server for ScalarDL Ledger, configure an existing kubeconfig file or add a new kubeconfig file to access the Kubernetes cluster for ScalarDL Auditor. For details on how to configure the kubeconfig file of each managed Kubernetes cluster, see [Configure kubeconfig](CreateBastionServer.mdx#configure-kubeconfig).
+2. Configure port forwarding to each service from the bastion server.
+   * **ScalarDB Server**
+     ```console
+     kubectl port-forward -n <namespace> svc/<helm release name>-envoy 60051:60051
+     ```
+   * **ScalarDL Ledger**
+     ```console
+     kubectl --context <kubeconfig context for Ledger> port-forward -n <namespace> svc/<helm release name>-envoy 50051:50051
+     kubectl --context <kubeconfig context for Ledger> port-forward -n <namespace> svc/<helm release name>-envoy 50052:50052
+     ```
+   * **ScalarDL Auditor**
+     ```console
+     kubectl --context <kubeconfig context for Auditor> port-forward -n <namespace> svc/<helm release name>-envoy 40051:40051
+     kubectl --context <kubeconfig context for Auditor> port-forward -n <namespace> svc/<helm release name>-envoy 40052:40052
+     ```
+3. Configure the properties file to access ScalarDB or ScalarDL via `localhost`.
+   * **Client properties file for ScalarDB Server**
+     ```properties
+     scalar.db.contact_points=localhost
+     scalar.db.contact_port=60051
+     scalar.db.storage=grpc
+     scalar.db.transaction_manager=grpc
+     ```
+   * **Client properties file for ScalarDL Ledger**
+     ```properties
+     scalar.dl.client.server.host=localhost
+     scalar.dl.ledger.server.port=50051
+     scalar.dl.ledger.server.privileged_port=50052
+     ```
+   * **Client properties file for ScalarDL Ledger with ScalarDL Auditor mode enabled**
+     ```properties
+     # Ledger
+     scalar.dl.client.server.host=localhost
+     scalar.dl.ledger.server.port=50051
+     scalar.dl.ledger.server.privileged_port=50052
+
+     # Auditor
+     scalar.dl.client.auditor.enabled=true
+     scalar.dl.client.auditor.host=localhost
+     scalar.dl.auditor.server.port=40051
+     scalar.dl.auditor.server.privileged_port=40052
+     ```
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/AwsMarketplaceGuide.mdx b/versioned_docs/version-3.9/scalar-kubernetes/AwsMarketplaceGuide.mdx
new file mode 100644
index 00000000..2b22586b
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/AwsMarketplaceGuide.mdx
@@ -0,0 +1,501 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# How to install Scalar products through AWS Marketplace
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+Scalar products (ScalarDB, ScalarDL, and their tools) are available in the AWS Marketplace as container images.
+This guide explains how to install Scalar products through the AWS Marketplace.
+
+:::note
+
+- Some Scalar products are available under commercial licenses, and the AWS Marketplace provides those products with pay-as-you-go (PAYG) pricing. When you use pay-as-you-go pricing, AWS will charge you the Scalar product license fee based on your usage.
+- Previously, a bring-your-own-license (BYOL) option was offered in the AWS Marketplace. However, that option has been deprecated and removed, so it is no longer supported in the AWS Marketplace.
+- A BYOL option is provided in the following public container repositories outside of the AWS Marketplace. If you don't have a license key, please [contact us](https://www.scalar-labs.com/contact-us).
+  - [ScalarDB Cluster Enterprise Standard](https://github.com/orgs/scalar-labs/packages/container/package/scalardb-cluster-node-byol-standard)
+  - [ScalarDB Cluster Enterprise Premium](https://github.com/orgs/scalar-labs/packages/container/package/scalardb-cluster-node-byol-premium)
+  - [ScalarDL Ledger](https://github.com/orgs/scalar-labs/packages/container/package/scalardl-ledger-byol)
+  - [ScalarDL Auditor](https://github.com/orgs/scalar-labs/packages/container/package/scalardl-auditor-byol)
+
+:::
+
+## Subscribe to Scalar products from AWS Marketplace
+
+1. Select your Scalar product to see the links to the AWS Marketplace.
+
+   Select your edition of ScalarDB Enterprise.
+
+   **ScalarDB Enterprise Standard**
+
+   | PAYG                                                                             | BYOL (Deprecated)                                                                |
+   |:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
+   | [ScalarDB Cluster](https://aws.amazon.com/marketplace/pp/prodview-jx6qxatkxuwm4) | [ScalarDB Cluster](https://aws.amazon.com/marketplace/pp/prodview-alcwrmw6v4cfy) |
+
+   **ScalarDB Enterprise Premium**
+
+   | PAYG                                                                             | BYOL (Deprecated)                                                                |
+   |:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
+   | [ScalarDB Cluster](https://aws.amazon.com/marketplace/pp/prodview-djqw3zk6dwyk6) | [ScalarDB Cluster](https://aws.amazon.com/marketplace/pp/prodview-alcwrmw6v4cfy) |
+
+   **ScalarDL Ledger**
+
+   | PAYG                                                                             | BYOL (Deprecated)                                                                |
+   |:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
+   | [ScalarDL Ledger](https://aws.amazon.com/marketplace/pp/prodview-wttioaezp5j6e)  | [ScalarDL Ledger](https://aws.amazon.com/marketplace/pp/prodview-3jdwfmqonx7a2)  |
+
+   **ScalarDL Auditor**
+
+   | PAYG                                                                             | BYOL (Deprecated)                                                                |
+   |:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
+   | [ScalarDL Auditor](https://aws.amazon.com/marketplace/pp/prodview-ke3yiw4mhriuu) | [ScalarDL Auditor](https://aws.amazon.com/marketplace/pp/prodview-tj7svy75gu7m6) |
+
+   **Scalar Manager**
+
+   | PAYG                                                                            | BYOL                                        |
+   |:----------------------------------------------------------------------------------|:-----------------------------------------------|
+   | [Scalar Manager](https://aws.amazon.com/marketplace/pp/prodview-gfyn6ipmxf2hq)  | Scalar Manager doesn't have a BYOL option.  |
+
+1. Select **Continue to Subscribe**.
+
+1. Sign in to AWS Marketplace using your IAM user.
+   If you have already signed in, this step will be skipped automatically.
+
+1. Read the **Terms and Conditions** and select **Accept Terms**.
+   Processing the subscription takes some time.
+   When it's done, you can see the current date in the **Effective date** column.
+   You can also see our products on the [Manage subscriptions](https://us-east-1.console.aws.amazon.com/marketplace/home#/subscriptions) page of the AWS Console.
+
+## **[Pay-As-You-Go]** Deploy containers on EKS (Amazon Elastic Kubernetes Service) from AWS Marketplace using Scalar Helm Charts
+
+By subscribing to Scalar products in the AWS Marketplace, you can pull the container images of Scalar products from the private container registry ([ECR](https://aws.amazon.com/ecr/)) of the AWS Marketplace. This section explains how to deploy Scalar products with pay-as-you-go pricing in your [EKS](https://aws.amazon.com/eks/) cluster from the private container registry.
+
+1. Create an OIDC provider.
+
+   You must create an identity and access management (IAM) OpenID Connect (OIDC) provider to run the AWS Marketplace Metering Service from ScalarDL pods.
+
+   ```console
+   eksctl utils associate-iam-oidc-provider --region <region> --cluster <cluster name> --approve
+   ```
+
+   For details, see [Creating an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html).
+
+1. Create a service account.
+
+   To allow your pods to run the AWS Marketplace Metering Service, you can use [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html).
+
+   ```console
+   eksctl create iamserviceaccount \
+       --name <service account name> \
+       --namespace <namespace> \
+       --region <region> \
+       --cluster <cluster name> \
+       --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringFullAccess \
+       --approve \
+       --override-existing-serviceaccounts
+   ```
+
+1. Update the custom values file of the Helm Chart for the Scalar product that you want to install.
+   You need to specify the private container registry (ECR) of the AWS Marketplace as the value for `[].image.repository` in the custom values file. You also need to specify the service account name that you created in the previous step as the value for `[].serviceAccount.serviceAccountName` and set `[].serviceAccount.automountServiceAccountToken` to `true`. See the following examples based on the product you're using.
+
+   Select your edition of ScalarDB Enterprise.
+
+   **ScalarDB Enterprise Standard**
+
+   In the `scalardb-cluster-standard-custom-values.yaml` file:
+
+   ```yaml
+   scalardbCluster:
+     image:
+       repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalardb-cluster-node-aws-payg-standard"
+     serviceAccount:
+       serviceAccountName: "<service account name>"
+       automountServiceAccountToken: true
+   ```
+
+   :::note
+
+   For more details on the configurations, see [Configure a custom values file for ScalarDB Cluster](../helm-charts/configure-custom-values-scalardb-cluster.mdx).
+
+   :::
+
+   **ScalarDB Enterprise Premium**
+
+   In the `scalardb-cluster-premium-custom-values.yaml` file:
+
+   ```yaml
+   scalardbCluster:
+     image:
+       repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalardb-cluster-node-aws-payg-premium"
+     serviceAccount:
+       serviceAccountName: "<service account name>"
+       automountServiceAccountToken: true
+   ```
+
+   :::note
+
+   For more details on the configurations, see [Configure a custom values file for ScalarDB Cluster](../helm-charts/configure-custom-values-scalardb-cluster.mdx).
+
+   :::

+   **ScalarDL Ledger**

+
+   In the `scalardl-ledger-custom-values.yaml` file:
+
+   ```yaml
+   ledger:
+     image:
+       repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalardl-ledger-aws-payg"
+     serviceAccount:
+       serviceAccountName: "<service account name>"
+       automountServiceAccountToken: true
+   ```
+
+   :::note
+
+   For more details on the configurations, see [Configure a custom values file for ScalarDL Ledger](../helm-charts/configure-custom-values-scalardl-ledger.mdx).
+
+   :::

+   **ScalarDL Schema Loader for Ledger**

+ + You don't need to update the `[].image.repository` configuration in your `schema-loader-ledger-custom-values.yaml` file. The container image of ScalarDL Schema Loader is provided in the [public container repository](https://github.com/orgs/scalar-labs/packages/container/package/scalardl-schema-loader). + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Schema Loader](../helm-charts/configure-custom-values-scalardl-schema-loader.mdx). + + ::: + +
+ +

+   **ScalarDL Auditor**

+
+   In the `scalardl-auditor-custom-values.yaml` file:
+
+   ```yaml
+   auditor:
+     image:
+       repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalardl-auditor-aws-payg"
+     serviceAccount:
+       serviceAccountName: "<service account name>"
+       automountServiceAccountToken: true
+   ```
+
+   :::note
+
+   For more details on the configurations, see [Configure a custom values file for ScalarDL Auditor](../helm-charts/configure-custom-values-scalardl-auditor.mdx).
+
+   :::

+   **ScalarDL Schema Loader for Auditor**

+ + You don't need to update the `[].image.repository` configuration in your `schema-loader-auditor-custom-values.yaml` file. The container image of ScalarDL Schema Loader is provided in the [public container repository](https://github.com/orgs/scalar-labs/packages/container/package/scalardl-schema-loader). + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Schema Loader](../helm-charts/configure-custom-values-scalardl-schema-loader.mdx). + + ::: + +
+
+   In the `scalar-manager-custom-values.yaml` file:
+
+   ```yaml
+   api:
+     image:
+       repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalar-manager-api-aws-payg"
+   web:
+     image:
+       repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalar-manager-web-aws-payg"
+   serviceAccount:
+     serviceAccountName: "<service account name>"
+     automountServiceAccountToken: true
+   ```
+
+   :::note
+
+   For more details on the configurations, see [Configure a custom values file for Scalar Manager](../helm-charts/configure-custom-values-scalar-manager.mdx).
+
+   :::
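+
+   Before moving on to the deployment step, you can optionally confirm that the service account you created earlier is linked to the IAM role. This is a quick sanity check; `<service account name>` and `<namespace>` are placeholders:
+
+   ```console
+   # The output should include an eks.amazonaws.com/role-arn annotation added by eksctl.
+   kubectl get serviceaccount <service account name> -n <namespace> -o yaml
+   ```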
+ +1. Deploy Scalar products by using Helm Charts in conjunction with the above custom values files. See the following examples based on the product you're using. + + + + Select your edition of ScalarDB Enterprise. + + + ```console + helm install scalardb-cluster-standard scalar-labs/scalardb-cluster -f scalardb-cluster-standard-custom-values.yaml + ``` + + + ```console + helm install scalardb-cluster-premium scalar-labs/scalardb-cluster -f scalardb-cluster-premium-custom-values.yaml + ``` + + + + +

+   **ScalarDL Ledger**

+ + ```console + helm install scalardl-ledger scalar-labs/scalardl -f ./scalardl-ledger-custom-values.yaml + ``` + +

+   **ScalarDL Schema Loader for Ledger**

+ + ```console + helm install schema-loader scalar-labs/schema-loading -f ./schema-loader-ledger-custom-values.yaml + ``` +
+ +

+   **ScalarDL Auditor**

+ + ```console + helm install scalardl-auditor scalar-labs/scalardl-audit -f ./scalardl-auditor-custom-values.yaml + ``` + +

+   **ScalarDL Schema Loader for Auditor**

+ + ```console + helm install schema-loader scalar-labs/schema-loading -f ./schema-loader-auditor-custom-values.yaml + ``` +
+ + ```console + helm install scalar-manager scalar-labs/scalar-manager -f ./scalar-manager-custom-values.yaml + ``` + +
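+
+   After the install completes, you can check that the release was deployed and that the pods have started. This is a general check, with `<namespace>` as a placeholder:
+
+   ```console
+   # The release should show a "deployed" status, and the pods should eventually reach the Running state.
+   helm list -n <namespace>
+   kubectl get pods -n <namespace>
+   ```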
+ +## **[Deprecated] [BYOL]** Deploy containers on EKS (Amazon Elastic Kubernetes Service) from AWS Marketplace using Scalar Helm Charts + +By subscribing to Scalar products in the AWS Marketplace, you can pull the container images of Scalar products from the private container registry ([ECR](https://aws.amazon.com/ecr/)) of the AWS Marketplace. This section explains how to deploy Scalar products with the BYOL option in your [EKS](https://aws.amazon.com/eks/) cluster from the private container registry. + +1. Update the custom values file of the Helm Chart for the Scalar product that you want to install. + You need to specify the private container registry (ECR) of AWS Marketplace as the value of `[].image.repository` in the custom values file. See the following examples based on the product you're using. + + + + ```yaml + scalardbCluster: + image: + repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalardb-cluster-node-aws-byol" + ``` + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDB Cluster](../helm-charts/configure-custom-values-scalardb-cluster.mdx). + + ::: + + + +

+   **ScalarDL Ledger**

+ + In the `scalardl-ledger-custom-values.yaml` file: + + ```yaml + ledger: + image: + repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalar-ledger" + ``` + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Ledger](../helm-charts/configure-custom-values-scalardl-ledger.mdx). + + ::: + +

+   **ScalarDL Schema Loader for Ledger**

+ + You don't need to update the `[].image.repository` configuration in your `schema-loader-ledger-custom-values.yaml` file. The container image of ScalarDL Schema Loader is provided in the [public container repository](https://github.com/orgs/scalar-labs/packages/container/package/scalardl-schema-loader). + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Schema Loader](../helm-charts/configure-custom-values-scalardl-schema-loader.mdx). + + ::: + +
+ +

+   **ScalarDL Auditor**

+ + In the `scalardl-auditor-custom-values.yaml` file: + + ```yaml + auditor: + image: + repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalar-auditor" + ``` + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Auditor](../helm-charts/configure-custom-values-scalardl-auditor.mdx). + + ::: + +

+   **ScalarDL Schema Loader for Auditor**

+ + You don't need to update the `[].image.repository` configuration in your `schema-loader-auditor-custom-values.yaml` file. The container image of ScalarDL Schema Loader is provided in the [public container repository](https://github.com/orgs/scalar-labs/packages/container/package/scalardl-schema-loader). + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Schema Loader](../helm-charts/configure-custom-values-scalardl-schema-loader.mdx). + + ::: + +
+
+
+1. Deploy the Scalar products using the Helm Chart with the above custom values files. See the following examples based on the product you're using.
+
+   **ScalarDB Cluster**
+
+   ```console
+   helm install scalardb-cluster scalar-labs/scalardb-cluster -f scalardb-cluster-custom-values.yaml
+   ```
+

+   **ScalarDL Ledger**

+ + ```console + helm install scalardl-ledger scalar-labs/scalardl -f ./scalardl-ledger-custom-values.yaml + ``` + +

+   **ScalarDL Schema Loader for Ledger**

+ + ```console + helm install schema-loader scalar-labs/schema-loading -f ./schema-loader-ledger-custom-values.yaml + ``` +
+ +

+   **ScalarDL Auditor**

+ + ```console + helm install scalardl-auditor scalar-labs/scalardl-audit -f ./scalardl-auditor-custom-values.yaml + ``` + +

+   **ScalarDL Schema Loader for Auditor**

+ + ```console + helm install schema-loader scalar-labs/schema-loading -f ./schema-loader-auditor-custom-values.yaml + ``` +
+
+ +## **[Deprecated] [BYOL]** Deploy containers on Kubernetes other than EKS from AWS Marketplace using Scalar Helm Charts + +1. Install the `aws` command according to the [AWS Official Document (Installing or updating the latest version of the AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). + +1. Configure the AWS CLI with your credentials according to the [AWS Official Document (Configuration basics)](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html). + +1. Create a `reg-ecr-mp-secrets` secret resource for pulling the container images from the ECR of AWS Marketplace. + ```console + kubectl create secret docker-registry reg-ecr-mp-secrets \ + --docker-server=709825985650.dkr.ecr.us-east-1.amazonaws.com \ + --docker-username=AWS \ + --docker-password=$(aws ecr get-login-password --region us-east-1) + ``` + +1. Update the custom values file of the Helm Chart for the Scalar product that you want to install. + You need to specify the private container registry (ECR) of AWS Marketplace as the value of `[].image.repository` in the custom values file. + Also, you need to specify the `reg-ecr-mp-secrets` as the value of `[].imagePullSecrets`. See the following examples based on the product you're using. + + + + ```yaml + scalardbCluster: + image: + repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalardb-cluster-node-aws-byol" + imagePullSecrets: + - name: "reg-ecr-mp-secrets" + ``` + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDB Cluster](../helm-charts/configure-custom-values-scalardb-cluster.mdx). + + ::: + + + +

+   **ScalarDL Ledger**

+ + In the `scalardl-ledger-custom-values.yaml` file: + + ```yaml + ledger: + image: + repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalar-ledger" + imagePullSecrets: + - name: "reg-ecr-mp-secrets" + ``` + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Ledger](../helm-charts/configure-custom-values-scalardl-ledger.mdx). + + ::: + +

+   **ScalarDL Schema Loader for Ledger**

+ + You don't need to update the `[].image.repository` configuration in your `schema-loader-ledger-custom-values.yaml` file. The container image of ScalarDL Schema Loader is provided in the [public container repository](https://github.com/orgs/scalar-labs/packages/container/package/scalardl-schema-loader). + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Schema Loader](../helm-charts/configure-custom-values-scalardl-schema-loader.mdx). + + ::: + +
+ +

+   **ScalarDL Auditor**

+ + In the `scalardl-auditor-custom-values.yaml` file: + + ```yaml + auditor: + image: + repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/scalar/scalar-auditor" + imagePullSecrets: + - name: "reg-ecr-mp-secrets" + ``` + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Auditor](../helm-charts/configure-custom-values-scalardl-auditor.mdx). + + ::: + +

+   **ScalarDL Schema Loader for Auditor**

+ + You don't need to update the `[].image.repository` configuration in your `schema-loader-auditor-custom-values.yaml` file. The container image of ScalarDL Schema Loader is provided in the [public container repository](https://github.com/orgs/scalar-labs/packages/container/package/scalardl-schema-loader). + + :::note + + For more details on the configurations, see [Configure a custom values file for ScalarDL Schema Loader](../helm-charts/configure-custom-values-scalardl-schema-loader.mdx). + + ::: + +
+
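+
+   Before deploying, you can optionally confirm that the `reg-ecr-mp-secrets` secret exists in the namespace where you will deploy the Scalar products, since the pods can't pull the images from the AWS Marketplace ECR without it. This is a quick check; `<namespace>` is a placeholder (the creation command above uses the current namespace):
+
+   ```console
+   kubectl get secret reg-ecr-mp-secrets -n <namespace>
+   ```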
+
+1. Deploy the Scalar products using the Helm Chart with the above custom values files.
+   * Examples
+     Please refer to the **[Deprecated] [BYOL] Deploy containers on EKS (Amazon Elastic Kubernetes Service) from AWS Marketplace using Scalar Helm Charts** section of this document.
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/AzureMarketplaceGuide.mdx b/versioned_docs/version-3.9/scalar-kubernetes/AzureMarketplaceGuide.mdx
new file mode 100644
index 00000000..da3612a1
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/AzureMarketplaceGuide.mdx
@@ -0,0 +1,235 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# How to install Scalar products through Azure Marketplace
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+:::warning
+
+Scalar products are currently not available in Azure Marketplace. For details on other ways to get the container images of Scalar products, please see [How to get the container images of Scalar products](./HowToGetContainerImages.mdx).
+
+:::
+
+Scalar products (ScalarDB, ScalarDL, and their tools) are provided in Azure Marketplace as container offers. This guide explains how to install Scalar products through Azure Marketplace.
+
+Note that some Scalar products are licensed under commercial licenses, and the Azure Marketplace provides them as BYOL (Bring Your Own License). Please make sure you have the appropriate licenses.
+
+## Get Scalar products from Microsoft Azure Marketplace
+
+1. Select your Scalar product to see the links to the Microsoft Azure Marketplace.
+
+   - [ScalarDB](https://azuremarketplace.microsoft.com/en/marketplace/apps/scalarinc.scalardb)
+   - [ScalarDL](https://azuremarketplace.microsoft.com/en/marketplace/apps/scalarinc.scalardl)
+
+1. Select **Get It Now**.
+
+1. Sign in to Azure Marketplace using your work email address.
+   Use the work email address that is associated with your Microsoft Azure account.
+   If you have already signed in, this step will be skipped automatically.
+
+1. Enter your information.
+   Although **Company** is not a required field, please enter it.
+
+1. Select the **Software plan** you need from the pull-down menu.
+   A **Software plan** is a combination of a container image and a license. Select the software plan that you will use.
+
+1. Select **Continue**.
+   After selecting **Continue**, you will automatically be redirected to the Azure portal.
+
+1. Create a private container registry (Azure Container Registry).
+   Follow the on-screen instructions to create your private container registry.
+   The container images of Scalar products will be copied to your private container registry.
+
+1. Repeat these steps as needed.
+   You need several container images to run Scalar products on Kubernetes, but Azure Marketplace copies only one container image at a time. So, you need to subscribe to several software plans (that is, repeat the subscription operation) as needed.
+   - The container images that you need are as follows. Select your Scalar product to see details about the container images.
+
+     **ScalarDB**
+     - ScalarDB Cluster (BYOL)
+     - [Deprecated] ScalarDB Server Default (2vCPU, 4GiB Memory)
+     - [Deprecated] ScalarDB GraphQL Server (optional)
+     - [Deprecated] ScalarDB SQL Server (optional)
+
+     **ScalarDL**
+     - ScalarDL Ledger Default (2vCPU, 4GiB Memory)
+     - ScalarDL Auditor Default (2vCPU, 4GiB Memory)
+       - **ScalarDL Auditor** is optional. If you use **ScalarDL Auditor**, subscribe to it.
+     - ScalarDL Schema Loader
+
+Now, you can pull the container images of the Scalar products from your private container registry.
+For more details about Azure Container Registry, please refer to the [Azure Container Registry documentation](https://docs.microsoft.com/en-us/azure/container-registry/).
+
+## Deploy containers on AKS (Azure Kubernetes Service) from your private container registry using Scalar Helm Charts
+
+1. Specify your private container registry (Azure Container Registry) when you create an AKS cluster.
+   * GUI (Azure Portal)
+     At the **Azure Container Registry** parameter in the **Integrations** tab, specify your private container registry.
+   * CLI ([az aks create](https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create) command)
+     Specify the `--attach-acr` flag with the name of your private container registry. You can also configure Azure Container Registry integration for an existing AKS cluster by using the [az aks update](https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update) command with the `--attach-acr` flag. For more details, please refer to the [Azure Official Document](https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration).
+
+1. Update the custom values file of the Helm Chart of a Scalar product you want to install. You need to specify your private container registry as the value of `[].image.repository` in the custom values file. See the following examples based on the product you're using.
+
+   **ScalarDB Cluster**
+
+   ```yaml
+   scalardbCluster:
+     image:
+       repository: "example.azurecr.io/scalarinc/scalardb-cluster-node-azure-byol"
+   ```
+
+   Select the ScalarDL product you're using.
+
+   **ScalarDL Ledger**
+
+   In the `scalardl-ledger-custom-values.yaml` file:
+
+   ```yaml
+   ledger:
+     image:
+       repository: "example.azurecr.io/scalarinc/scalar-ledger"
+   ```
+
+   **ScalarDL Auditor**
+
+   In the `scalardl-auditor-custom-values.yaml` file:
+
+   ```yaml
+   auditor:
+     image:
+       repository: "example.azurecr.io/scalarinc/scalar-auditor"
+   ```
+
+   **ScalarDL Schema Loader**
+
+   In the `schema-loader-custom-values.yaml` file:
+
+   ```yaml
+   schemaLoading:
+     image:
+       repository: "example.azurecr.io/scalarinc/scalardl-schema-loader"
+   ```
+
+1. Deploy the Scalar product using the Helm Chart with the above custom values file. See the following examples based on the product you're using.
+
+   **ScalarDB Cluster**
+
+   ```console
+   helm install scalardb-cluster scalar-labs/scalardb-cluster -f scalardb-cluster-custom-values.yaml
+   ```
+
+   Select the ScalarDL product you're using.
+
+   **ScalarDL Ledger**
+
+   ```console
+   helm install scalardl-ledger scalar-labs/scalardl -f ./scalardl-ledger-custom-values.yaml
+   ```
+
+   **ScalarDL Auditor**
+
+   ```console
+   helm install scalardl-auditor scalar-labs/scalardl-audit -f ./scalardl-auditor-custom-values.yaml
+   ```
+
+   **ScalarDL Schema Loader**
+
+   ```console
+   helm install schema-loader scalar-labs/schema-loading -f ./schema-loader-custom-values.yaml
+   ```
+
+## Deploy containers on Kubernetes other than AKS (Azure Kubernetes Service) from your private container registry using Scalar Helm Charts
+
+1. Install the `az` command according to the [Azure Official Document (How to install the Azure CLI)](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli).
+
+1. Sign in with Azure CLI.
+
+   ```console
+   az login
+   ```
+
+1. Create a **service principal** for authentication to your private container registry according to the [Azure Official Document (Azure Container Registry authentication with service principals)](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auth-service-principal).
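+
+   For reference, one way to create a service principal that can pull from the registry is the following sketch. The `acrpull` role is sufficient for pulling images; `<service principal name>` and `<registry name>` are placeholders:
+
+   ```console
+   # Create a service principal scoped to the registry with pull-only permissions.
+   # The command prints the appId (Service principal ID) and password (Service principal password).
+   az ad sp create-for-rbac --name <service principal name> \
+     --role acrpull \
+     --scopes $(az acr show --name <registry name> --query id --output tsv)
+   ```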
+   You will use the **Service principal ID** and the **Service principal password** in the next step.
+
+1. Create a `reg-acr-secrets` secret resource for pulling the container images from your private container registry.
+
+   ```console
+   kubectl create secret docker-registry reg-acr-secrets \
+     --docker-server=<your private container registry server> \
+     --docker-username=<service principal ID> \
+     --docker-password=<service principal password>
+   ```
+
+1. Update the custom values file of the Helm Chart of a Scalar product you want to install.
+   You need to specify your private container registry as the value of `[].image.repository` in the custom values file.
+   Also, you need to specify the `reg-acr-secrets` as the value of `[].imagePullSecrets`. See the following examples based on the product you're using.
+
+   **ScalarDB Cluster**
+
+   ```yaml
+   scalardbCluster:
+     image:
+       repository: "example.azurecr.io/scalarinc/scalardb-cluster-node-azure-byol"
+     imagePullSecrets:
+       - name: "reg-acr-secrets"
+   ```
+
+   Select the ScalarDL product you're using.
+
+   **ScalarDL Ledger**
+
+   In the `scalardl-ledger-custom-values.yaml` file:
+
+   ```yaml
+   ledger:
+     image:
+       repository: "example.azurecr.io/scalarinc/scalar-ledger"
+     imagePullSecrets:
+       - name: "reg-acr-secrets"
+   ```
+
+   **ScalarDL Auditor**
+
+   In the `scalardl-auditor-custom-values.yaml` file:
+
+   ```yaml
+   auditor:
+     image:
+       repository: "example.azurecr.io/scalarinc/scalar-auditor"
+     imagePullSecrets:
+       - name: "reg-acr-secrets"
+   ```
+
+   **ScalarDL Schema Loader**
+
+   In the `schema-loader-custom-values.yaml` file:
+
+   ```yaml
+   schemaLoading:
+     image:
+       repository: "example.azurecr.io/scalarinc/scalardl-schema-loader"
+     imagePullSecrets:
+       - name: "reg-acr-secrets"
+   ```
+
+1. Deploy the Scalar product using the Helm Chart with the above custom values file.
+   * Examples
+     Please refer to the **Deploy containers on AKS (Azure Kubernetes Service) from your private container registry using Scalar Helm Charts** section of this document.
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/BackupNoSQL.mdx b/versioned_docs/version-3.9/scalar-kubernetes/BackupNoSQL.mdx
new file mode 100644
index 00000000..6882c8b1
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/BackupNoSQL.mdx
@@ -0,0 +1,149 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Back up a NoSQL database in a Kubernetes environment
+
+This guide explains how to create a transactionally consistent backup of managed databases that ScalarDB or ScalarDL uses in a Kubernetes environment. Please note that, when using a NoSQL database or multiple databases, you **must** pause ScalarDB or ScalarDL to create a transactionally consistent backup.
+
+For details on how ScalarDB backs up databases, see [A Guide on How to Backup and Restore Databases Used Through ScalarDB](https://scalardb.scalar-labs.com/docs/latest/backup-restore/).
+
+In this guide, we assume that you are using point-in-time recovery (PITR) or its equivalent features. Therefore, we must create a period where there are no ongoing transactions for restoration. You can then restore data to that specific period by using PITR. If you restore data to a time without creating a period where there are no ongoing transactions, the restored data could be transactionally inconsistent, causing ScalarDB or ScalarDL to not work properly with the data.
+
+## Create a period to restore data, and perform a backup
+
+1. Check the following four points by running the `kubectl get pod` command before starting the backup operation:
+   * **The number of ScalarDB or ScalarDL pods.** Write down the number of pods so that you can compare that number with the number of pods after performing the backup.
+   * **The ScalarDB or ScalarDL pod names in the `NAME` column.** Write down the pod names so that you can compare those names with the pod names after performing the backup.
+   * **The ScalarDB or ScalarDL pod status is `Running` in the `STATUS` column.** Confirm that the pods are running before proceeding with the backup. You will need to pause the pods in the next step.
+   * **The restart count of each pod in the `RESTARTS` column.** Write down the restart count of each pod so that you can compare the count with the restart counts after performing the backup.
+2. Pause the ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to pause the pods, see the [Details on using `scalar-admin`](BackupNoSQL.mdx#details-on-using-scalar-admin) section in this guide.
+3. Write down the `pause completed` time. You will need to refer to that time when restoring the data by using the PITR feature.
+4. Back up each database by using the backup feature. If you have enabled the automatic backup and PITR features, the managed databases will perform the backup automatically. Please note that you should wait for approximately 10 seconds so that you can create a sufficiently long period to avoid a clock-skew issue between the client clock and the database clock. This 10-second period is the exact period in which you can restore data by using the PITR feature.
+5. Unpause the ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to unpause the pods, see the [Details on using `scalar-admin`](BackupNoSQL.mdx#details-on-using-scalar-admin) section in this guide.
+6. Check the `unpause started` time. You must check the `unpause started` time to confirm the exact period in which you can restore data by using the PITR feature.
+7. Check the pod status after performing the backup. You must check the following four points by using the `kubectl get pod` command after the backup operation is completed.
+   * **The number of ScalarDB or ScalarDL pods.** Confirm this number matches the number of pods that you wrote down before performing the backup.
+   * **The ScalarDB or ScalarDL pod names in the `NAME` column.** Confirm the names match the pod names that you wrote down before performing the backup.
+   * **The ScalarDB or ScalarDL pod status is `Running` in the `STATUS` column.**
+   * **The restart count of each pod in the `RESTARTS` column.** Confirm the counts match the restart counts that you wrote down before performing the backup.
+
+   **If any of these values differ, you must retry the backup operation from the beginning.** The difference may be caused by some pods being added or restarted while performing the backup. In that case, those pods will run in the `unpause` state. Pods in the `unpause` state will cause the backup data to be transactionally inconsistent.
+8. **(Amazon DynamoDB only)** If you use the PITR feature of DynamoDB, you will need to perform additional steps to create a backup because the feature restores data to another table with a different name by using PITR. For details on the additional steps after creating the exact period in which you can restore the data, please see [Restore databases in a Kubernetes environment](RestoreDatabase.mdx#amazon-dynamodb).
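+
+The following sketch summarizes the sequence above as commands, with `<namespace>` as a placeholder. The pause and unpause commands themselves are described in the [Details on using `scalar-admin`](BackupNoSQL.mdx#details-on-using-scalar-admin) section below:
+
+```console
+kubectl get pod -n <namespace>   # Record the number of pods, their names, and their restart counts.
+# Pause ScalarDB or ScalarDL with scalar-admin and write down the "pause completed" time.
+sleep 10                         # Keep a roughly 10-second period with no ongoing transactions.
+# Take the backup with the managed database's backup/PITR feature during this period.
+# Unpause with scalar-admin and check the "unpause started" time.
+kubectl get pod -n <namespace>   # Confirm the pod names, status, and restart counts are unchanged.
+```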
+
+## Back up multiple databases
+
+If you have two or more databases that the [Multi-storage Transactions](https://scalardb.scalar-labs.com/docs/latest/multi-storage-transactions/) or [Two-phase Commit Transactions](https://scalardb.scalar-labs.com/docs/latest/two-phase-commit-transactions/) feature uses, you must pause all instances of ScalarDB or ScalarDL and create the same period where no ongoing transactions exist in the databases.
+
+To ensure consistency between multiple databases, you must restore the databases to the same point in time by using the PITR feature.
+
+## Details on using `scalar-admin`
+
+### Check the Kubernetes resource name
+
+You must specify the SRV service URL with the `-s (--srv-service-url)` flag. In Kubernetes environments, the format of the SRV service URL is `_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local`.
+
+If you use Scalar Helm Charts to deploy ScalarDB or ScalarDL, the `my-svc` and `my-namespace` may vary depending on your environment. You must specify the headless service name as `my-svc` and the namespace as `my-namespace`.
+
+* Example
+  * ScalarDB Server
+    ```console
+    _scalardb._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
+    ```
+  * ScalarDL Ledger
+    ```console
+    _scalardl-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
+    ```
+  * ScalarDL Auditor
+    ```console
+    _scalardl-auditor-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
+    ```
+
+The Helm release name determines the headless service name `<helm release name>-headless`. You can see the Helm release name by running the following command:
+
+```console
+helm list -n ns-scalar
+```
+
+You should see the following output:
+
+```console
+NAME              NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                 APP VERSION
+scalardb          ns-scalar  1         2023-02-09 19:31:40.527130674 +0900 JST  deployed  scalardb-2.5.0        3.8.0
+scalardl-auditor  ns-scalar  1         2023-02-09 19:32:03.008986045 +0900 JST  deployed  scalardl-audit-2.5.1  3.7.1
+scalardl-ledger   ns-scalar  1         2023-02-09 19:31:53.459548418 +0900 JST  deployed  scalardl-4.5.1        3.7.1
+```
+
+You can also see the headless service name `<helm release name>-headless` by running the `kubectl get service` command.
+
+```console
+kubectl get service -n ns-scalar
+```
+
+You should see the following output:
+
+```console
+NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                           AGE
+scalardb-envoy                   LoadBalancer   10.99.245.143    <pending>     60051:31110/TCP                   2m2s
+scalardb-envoy-metrics           ClusterIP      10.104.56.87     <none>        9001/TCP                          2m2s
+scalardb-headless                ClusterIP      None             <none>        60051/TCP                         2m2s
+scalardb-metrics                 ClusterIP      10.111.213.194   <none>        8080/TCP                          2m2s
+scalardl-auditor-envoy           LoadBalancer   10.111.141.43    <pending>     40051:31553/TCP,40052:31171/TCP   99s
+scalardl-auditor-envoy-metrics   ClusterIP      10.104.245.188   <none>        9001/TCP                          99s
+scalardl-auditor-headless        ClusterIP      None             <none>        40051/TCP,40053/TCP,40052/TCP     99s
+scalardl-auditor-metrics         ClusterIP      10.105.119.158   <none>        8080/TCP                          99s
+scalardl-ledger-envoy            LoadBalancer   10.96.239.167    <pending>     50051:32714/TCP,50052:30857/TCP   109s
+scalardl-ledger-envoy-metrics    ClusterIP      10.97.204.18     <none>        9001/TCP                          109s
+scalardl-ledger-headless         ClusterIP      None             <none>        50051/TCP,50053/TCP,50052/TCP     109s
+scalardl-ledger-metrics          ClusterIP      10.104.216.189   <none>        8080/TCP                          109s
+```
+
+### Pause
+
+You can send a pause request to ScalarDB or ScalarDL pods in a Kubernetes environment.
+
+* Example
+  * ScalarDB Server
+    ```console
+    kubectl run scalar-admin-pause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c pause -s _scalardb._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
+    ```
+  * ScalarDL Ledger
+    ```console
+    kubectl run scalar-admin-pause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c pause -s _scalardl-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
+    ```
+  * ScalarDL Auditor
+    ```console
+    kubectl run scalar-admin-pause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c pause -s _scalardl-auditor-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
+    ```
+
+### Unpause
+
+You can send an unpause request to ScalarDB or ScalarDL pods in a Kubernetes environment.
+
+* Example
+  * ScalarDB Server
+    ```console
+    kubectl run scalar-admin-unpause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c unpause -s _scalardb._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
+    ```
+  * ScalarDL Ledger
+    ```console
+    kubectl run scalar-admin-unpause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c unpause -s _scalardl-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
+    ```
+  * ScalarDL Auditor
+    ```console
+    kubectl run scalar-admin-unpause --image=ghcr.io/scalar-labs/scalar-admin:<tag> --restart=Never -it -- -c unpause -s _scalardl-auditor-admin._tcp.<helm release name>-headless.<namespace>.svc.cluster.local
+    ```
+
+### Check the `pause completed` time and `unpause started` time
+
+The `scalar-admin` pods output the `pause completed` time and `unpause started` time to stdout. You can also see those times by running the `kubectl logs` command.
+
+```console
+kubectl logs scalar-admin-pause
+```
+```console
+kubectl logs scalar-admin-unpause
+```
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/BackupRDB.mdx b/versioned_docs/version-3.9/scalar-kubernetes/BackupRDB.mdx
new file mode 100644
index 00000000..804d6360
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/BackupRDB.mdx
@@ -0,0 +1,21 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Back up an RDB in a Kubernetes environment
+
+This guide explains how to create a backup of a single relational database (RDB) that ScalarDB or ScalarDL uses in a Kubernetes environment. Please note that this guide assumes that you are using a managed database from a cloud services provider.
+
+If you have two or more RDBs that the [Multi-storage Transactions](https://scalardb.scalar-labs.com/docs/latest/multi-storage-transactions/) or [Two-phase Commit Transactions](https://scalardb.scalar-labs.com/docs/latest/two-phase-commit-transactions/) feature uses, you must follow the instructions in [Back up a NoSQL database in a Kubernetes environment](BackupNoSQL.mdx) instead.
+
+## Perform a backup
+
+To perform backups, you should enable the automated backup feature available in the managed databases. By enabling this feature, you do not need to perform any additional backup operations. For details on the backup configurations in each managed database, see the following guides:
+
+* [Set up a database for ScalarDB/ScalarDL deployment on AWS](SetupDatabaseForAWS.mdx)
+* [Set up a database for ScalarDB/ScalarDL deployment on Azure](SetupDatabaseForAzure.mdx)
+
+Because the managed RDB keeps backup data consistent from a transactions perspective, you can restore backup data to any point in time by using the point-in-time recovery (PITR) feature of the managed RDB.
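+
+If you want to confirm that automated backups (which PITR relies on) are enabled, most providers let you check this from the CLI. For example, on Amazon RDS, the following sketch shows the backup retention period and the latest restorable time; `<DB instance identifier>` is a placeholder:
+
+```console
+# BackupRetentionPeriod must be greater than 0 for automated backups and PITR to work.
+aws rds describe-db-instances --db-instance-identifier <DB instance identifier> \
+  --query 'DBInstances[0].[BackupRetentionPeriod,LatestRestorableTime]'
+```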
+
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/BackupRestoreGuide.mdx b/versioned_docs/version-3.9/scalar-kubernetes/BackupRestoreGuide.mdx
new file mode 100644
index 00000000..8faabbd4
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/BackupRestoreGuide.mdx
@@ -0,0 +1,48 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Back up and restore ScalarDB or ScalarDL data in a Kubernetes environment
+
+This guide explains how to back up and restore ScalarDB or ScalarDL data in a Kubernetes environment. Please note that this guide assumes that you are using a managed database from a cloud services provider as the backend database for ScalarDB or ScalarDL. The following is a list of the managed databases that this guide assumes you might be using:
+
+* NoSQL: does not support transactions
+  * Amazon DynamoDB
+  * Azure Cosmos DB for NoSQL
+* Relational database (RDB): supports transactions
+  * Amazon RDS
+    * MySQL
+    * Oracle
+    * PostgreSQL
+    * SQL Server
+  * Amazon Aurora
+    * MySQL
+    * PostgreSQL
+  * Azure Database
+    * MySQL
+    * PostgreSQL
+
+For details on how to back up and restore databases used with ScalarDB in a transactionally consistent way, see [A Guide on How to Backup and Restore Databases Used Through ScalarDB](https://scalardb.scalar-labs.com/docs/latest/backup-restore/).
+
+## Perform a backup
+
+### Confirm the type of database and number of databases you are using
+
+How you perform backup and restore depends on the type of database (NoSQL or RDB) and the number of databases you are using.
+
+#### NoSQL or multiple databases
+
+If you are using a NoSQL database, or if you have two or more databases that the [Multi-storage Transactions](https://scalardb.scalar-labs.com/docs/latest/multi-storage-transactions/) or [Two-phase Commit Transactions](https://scalardb.scalar-labs.com/docs/latest/two-phase-commit-transactions/) feature uses, please see [Back up a NoSQL database in a Kubernetes environment](BackupNoSQL.mdx) for details on how to perform a backup.
+
+#### Single RDB
+
+If you are using a single RDB, please see [Back up an RDB in a Kubernetes environment](BackupRDB.mdx) for details on how to perform a backup.
+
+If you have two or more RDBs that the [Multi-storage Transactions](https://scalardb.scalar-labs.com/docs/latest/multi-storage-transactions/) or [Two-phase Commit Transactions](https://scalardb.scalar-labs.com/docs/latest/two-phase-commit-transactions/) feature uses, you must follow the instructions in [Back up a NoSQL database in a Kubernetes environment](BackupNoSQL.mdx) instead.
+
+## Restore a database
+
+For details on how to restore data from a managed database, please see [Restore databases in a Kubernetes environment](RestoreDatabase.mdx).
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarDB.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarDB.mdx
new file mode 100644
index 00000000..2ea91d54
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarDB.mdx
@@ -0,0 +1,104 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+  - Deprecated
+displayed_sidebar: docsEnglish
+---
+
+# Guidelines for creating an AKS cluster for ScalarDB Server
+
+This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for a ScalarDB Server deployment.
+For details on how to deploy ScalarDB Server on an AKS cluster, see [Deploy ScalarDB Server on AKS](ManualDeploymentGuideScalarDBServerOnAKS.mdx).
+
+## Before you begin
+
+You must create an AKS cluster based on the following requirements, recommendations, and your project's requirements. For specific details about how to create an AKS cluster, refer to the following official Microsoft documentation based on the tool you use in your environment:
+
+* [Azure CLI](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli)
+* [PowerShell](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-powershell)
+* [Azure portal](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal)
+
+## Requirements
+
+When deploying ScalarDB Server, you must:
+
+* Create the AKS cluster by using a [supported Kubernetes version](https://scalardb.scalar-labs.com/docs/latest/requirements/#kubernetes).
+* Configure the AKS cluster based on the version of Kubernetes and your project's requirements.
+
+## Recommendations (optional)
+
+The following are some recommendations for deploying ScalarDB Server. These recommendations are not required, so you can choose whether or not to apply them based on your needs.
+
+### Create at least three worker nodes and three pods
+
+To ensure that the AKS cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardb-custom-values.yaml) of `podAntiAffinity` for distributing three pods across the worker nodes.
+
+:::note
+
+If you place the worker nodes in different [availability zones](https://learn.microsoft.com/en-us/azure/availability-zones/az-overview) (AZs), you can withstand an AZ failure.
+
+:::
+
+### Use 4vCPU / 8GB memory nodes for the worker node in the ScalarDB Server node pool
+
+From the perspective of commercial licenses, resources for one pod running ScalarDB Server are limited to 2vCPU / 4GB memory. In addition to the ScalarDB Server pod, Kubernetes could deploy some of the following components to each worker node:
+
+* ScalarDB Server pod (2vCPU / 4GB)
+* Envoy proxy
+* Your application pods (if you choose to run your application's pods on the same worker node)
+* Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`)
+* Kubernetes components
+
+With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [Create at least three worker nodes and three pods](#create-at-least-three-worker-nodes-and-three-pods).
+
+However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum for a production environment. You should also consider the resources of the AKS cluster (for example, the number of worker nodes, vCPUs per node, memory per node, ScalarDB Server pods, and pods for your application), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node when deciding the worker node resources.
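+
+For reference, the following sketch creates a node pool that satisfies these minimums; the next section explains why a separate user-mode node pool is recommended. The resource group, cluster, and node pool names are placeholders, and `Standard_F4s_v2` is just one example of a VM size with 4vCPU / 8GB memory:
+
+```console
+az aks nodepool add \
+  --resource-group <resource group name> \
+  --cluster-name <cluster name> \
+  --name scalardbpool \
+  --node-count 3 \
+  --node-vm-size Standard_F4s_v2 \
+  --mode User
+```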
+
+### Create a node pool for ScalarDB Server pods
+
+By default, AKS creates one system node pool named **agentpool** that is preferred for system pods (used to keep AKS running). We recommend creating another node pool with **user** mode for ScalarDB Server pods and deploying ScalarDB Server pods on this additional node pool.
+
+### Configure cluster autoscaler in AKS
+
+If you want to scale ScalarDB Server pods automatically by using [Horizontal Pod Autoscaler](https://learn.microsoft.com/en-us/azure/aks/concepts-scale#horizontal-pod-autoscaler), you should configure cluster autoscaler in AKS too. For details, refer to the official Microsoft documentation at [Cluster autoscaler](https://learn.microsoft.com/en-us/azure/aks/concepts-scale#cluster-autoscaler).
+
+In addition, if you configure cluster autoscaler, you should create a subnet in a virtual network (VNet) for AKS to ensure a sufficient number of IPs exist so that AKS can work without network issues after scaling. The required number of IPs varies depending on the networking plug-in. For more details about the number of IPs required, refer to the following:
+
+* [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet)
+* [Configure Azure CNI networking in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni)
+
+### Create the AKS cluster on a private network
+
+You should create the AKS cluster on a private network (private subnet in a VNet) since ScalarDB Server does not provide any services to users directly via internet access. We recommend accessing ScalarDB Server via a private network from your applications.
+
+### Create the AKS cluster by using Azure CNI, if necessary
+
+The AKS default networking plug-in is [kubenet](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet). If kubenet does not meet your requirements, you should use [Azure Container Networking Interface (CNI)](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni).
+
+For example, if you want to deploy multiple ScalarDB Server environments on one AKS cluster (e.g., deploy a multi-tenant ScalarDB Server) and you want to control the connection between each tenant by using [Kubernetes NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/), kubenet supports only the Calico Network Policy, which the [Azure support team does not support](https://learn.microsoft.com/en-us/azure/aks/use-network-policies#differences-between-azure-network-policy-manager-and-calico-network-policy-and-their-capabilities). Please note that the Calico Network Policy is supported only by the Calico community or through additional paid support.
+
+The Azure support and engineering teams, however, do support Azure CNI. So, if you want to use Kubernetes NetworkPolicies and receive support from the Azure support team, you should use Azure CNI.
For more details about the differences between kubenet and Azure CNI, refer to the following official Microsoft documentation: + +* [Network concepts for applications in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-network) +* [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet) +* [Configure Azure CNI networking in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni) + +### Restrict connections by using some security features based on your requirements + +You should restrict unused connections in ScalarDB Server. To restrict unused connections, you can use some security features in Azure, like [network security groups](https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview). + +The connections (ports) that ScalarDB Server uses by default are as follows: + +* ScalarDB Server + * 60051/TCP (accepts requests from a client) + * 8080/TCP (accepts monitoring requests) +* Scalar Envoy (used with ScalarDB Server) + * 60051/TCP (load balancing for ScalarDB Server) + * 9001/TCP (accepts monitoring requests for Scalar Envoy itself) + +:::note + +- If you change the default listening port for ScalarDB Server in the configuration file (`database.properties`), you must allow connections by using the port that you configured. +- You must also allow the connections that AKS uses itself. For more details about AKS traffic requirements, refer to [Control egress traffic using Azure Firewall in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic). + +::: + diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarDL.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarDL.mdx new file mode 100644 index 00000000..c22cc225 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarDL.mdx @@ -0,0 +1,107 @@ +--- +displayed_sidebar: docsEnglish +--- + +# Guidelines for creating an AKS cluster for ScalarDL Ledger + +This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for ScalarDL Ledger deployment. For details on how to deploy ScalarDL Ledger on an AKS cluster, see [Deploy ScalarDL Ledger on AKS](ManualDeploymentGuideScalarDLOnAKS.mdx). + +## Before you begin + +You must create an AKS cluster based on the following requirements, recommendations, and your project's requirements. For specific details about how to create an AKS cluster, refer to the following official Microsoft documentation based on the tool you use in your environment: + +* [Azure CLI](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli) +* [PowerShell](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-powershell) +* [Azure portal](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal) + +## Requirements + +When deploying ScalarDL Ledger, you must: + +* Create the AKS cluster by using a [supported Kubernetes version](https://scalardb.scalar-labs.com/docs/latest/requirements/#kubernetes). +* Configure the AKS cluster based on the version of Kubernetes and your project's requirements. + +:::warning + +For Byzantine fault detection in ScalarDL to work properly, do not deploy your application pods on the same AKS cluster as the ScalarDL Ledger deployment. 
+
+:::
+
+## Recommendations (optional)
+
+The following are some recommendations for deploying ScalarDL Ledger. These recommendations are not required, so you can choose whether or not to apply these recommendations based on your needs.
+
+### Create at least three worker nodes and three pods
+
+To ensure that the AKS cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardl-custom-values.yaml) of `podAntiAffinity` for spreading three pods across the worker nodes.
+
+:::note
+
+If you place the worker nodes in different [availability zones](https://learn.microsoft.com/en-us/azure/availability-zones/az-overview) (AZs), you can withstand an AZ failure.
+
+:::
+
+### Use 4vCPU / 8GB memory nodes for the worker node in the ScalarDL Ledger node pool
+
+From the perspective of commercial licenses, resources for one pod running ScalarDL Ledger are limited to 2vCPU / 4GB memory. In addition to the ScalarDL Ledger pod, Kubernetes could deploy some of the following components to each worker node:
+
+* ScalarDL Ledger pod (2vCPU / 4GB)
+* Envoy proxy
+* Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`)
+* Kubernetes components
+
+With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [Create at least three worker nodes and three pods](#create-at-least-three-worker-nodes-and-three-pods).
+
+However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum for a production environment. You should also consider the resources of the AKS cluster (for example, the number of worker nodes, vCPUs per node, memory per node, and ScalarDL Ledger pods), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node when deciding the worker node resources.
+
+### Create a node pool for ScalarDL Ledger pods
+
+By default, AKS creates one system node pool named **agentpool**, which is preferred for system pods (used to keep AKS running). We recommend creating another node pool with **user** mode for ScalarDL Ledger pods and deploying ScalarDL Ledger pods on this additional node pool.
+
+### Configure cluster autoscaler in AKS
+
+If you want to scale ScalarDL Ledger pods automatically by using [Horizontal Pod Autoscaler](https://learn.microsoft.com/en-us/azure/aks/concepts-scale#horizontal-pod-autoscaler), you should configure cluster autoscaler in AKS too. For details, refer to the official Microsoft documentation at [Cluster autoscaler](https://learn.microsoft.com/en-us/azure/aks/concepts-scale#cluster-autoscaler).
+
+In addition, if you configure cluster autoscaler, you should create a subnet in a virtual network (VNet) for AKS to ensure a sufficient number of IPs exist so that AKS can work without network issues after scaling. The required number of IPs varies depending on the networking plug-in.
+For more details about the number of IPs required, refer to the following:
+
+* [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet)
+* [Configure Azure CNI networking in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni)
+
+### Create the AKS cluster on a private network
+
+You should create the AKS cluster on a private network (private subnet in a VNet) since ScalarDL Ledger does not provide any services to users directly via internet access. We recommend accessing ScalarDL Ledger via a private network from your applications.
+
+### Create the AKS cluster by using Azure CNI, if necessary
+
+The AKS default networking plug-in is [kubenet](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet). If kubenet does not meet your requirements, you should use [Azure Container Networking Interface (CNI)](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni).
+
+For example, suppose you want to deploy multiple ScalarDL Ledger environments on one AKS cluster (e.g., a multi-tenant ScalarDL Ledger) and control the connections between tenants by using [Kubernetes NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/). In that case, note that kubenet supports only the Calico Network Policy, which the [Azure support team does not support](https://learn.microsoft.com/en-us/azure/aks/use-network-policies#differences-between-azure-network-policy-manager-and-calico-network-policy-and-their-capabilities). The Calico Network Policy is supported only by the Calico community or through additional paid support.
+
+The Azure support and engineering teams, however, do support Azure CNI. So, if you want to use Kubernetes NetworkPolicies and receive support from the Azure support team, you should use Azure CNI. For more details about the differences between kubenet and Azure CNI, refer to the following official Microsoft documentation:
+
+* [Network concepts for applications in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-network)
+* [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet)
+* [Configure Azure CNI networking in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni)
+
+### Restrict connections by using some security features based on your requirements
+
+You should restrict unused connections in ScalarDL Ledger. To restrict unused connections, you can use some security features in Azure, like [network security groups](https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview).
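+
+For example, the following sketch adds a network security group rule that allows client requests to ScalarDL Ledger on port 50051, one of the ports listed below. The resource group, NSG name, rule name, and source address range are assumptions for illustration only:
+
+```console
+# Allow client requests to ScalarDL Ledger (50051/TCP) from a private address range.
+# All names and the source prefix are illustrative assumptions.
+az network nsg rule create \
+  --resource-group MyResourceGroup \
+  --nsg-name MyNSG \
+  --name AllowScalarDLLedgerClient \
+  --priority 100 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --source-address-prefixes 10.0.0.0/16 \
+  --destination-port-ranges 50051
+```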
+
+The connections (ports) that ScalarDL Ledger uses by default are as follows:
+
+* ScalarDL Ledger
+  * 50051/TCP (accepts requests from a client)
+  * 50052/TCP (accepts privileged requests from a client)
+  * 50053/TCP (accepts pause and unpause requests from a scalar-admin client tool)
+  * 8080/TCP (accepts monitoring requests)
+* Scalar Envoy (used with ScalarDL Ledger)
+  * 50051/TCP (load balancing for ScalarDL Ledger)
+  * 50052/TCP (load balancing for ScalarDL Ledger)
+  * 9001/TCP (accepts monitoring requests for Scalar Envoy itself)
+
+:::note
+
+- If you change the default listening port for ScalarDL Ledger in the configuration file (`ledger.properties`), you must allow connections by using the port that you configured.
+- You must also allow the connections that AKS uses itself. For more details about AKS traffic requirements, refer to [Control egress traffic using Azure Firewall in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic).
+
+:::
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarDLAuditor.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarDLAuditor.mdx
new file mode 100644
index 00000000..ce62bd0f
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarDLAuditor.mdx
@@ -0,0 +1,126 @@
+---
+displayed_sidebar: docsEnglish
+---
+
+# Guidelines for creating an AKS cluster for ScalarDL Ledger and ScalarDL Auditor
+
+This document explains the requirements and recommendations for creating an Azure Kubernetes Service (AKS) cluster for ScalarDL Ledger and ScalarDL Auditor deployment. For details on how to deploy ScalarDL Ledger and ScalarDL Auditor on an AKS cluster, see [Deploy ScalarDL Ledger and ScalarDL Auditor on AKS](ManualDeploymentGuideScalarDLAuditorOnAKS.mdx).
+
+## Before you begin
+
+You must create an AKS cluster based on the following requirements, recommendations, and your project's requirements. For specific details about how to create an AKS cluster, refer to the following official Microsoft documentation based on the tool you use in your environment:
+
+* [Azure CLI](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli)
+* [PowerShell](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-powershell)
+* [Azure portal](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal)
+
+## Requirements
+
+When deploying ScalarDL Ledger and ScalarDL Auditor, you must:
+
+* Create two AKS clusters by using a [supported Kubernetes version](https://scalardb.scalar-labs.com/docs/latest/requirements/#kubernetes).
+  * One AKS cluster for ScalarDL Ledger
+  * One AKS cluster for ScalarDL Auditor
+* Configure the AKS clusters based on the version of Kubernetes and your project's requirements.
+* Configure a virtual network (VNet) as follows.
+  * Connect the **VNet of AKS (for Ledger)** and the **VNet of AKS (for Auditor)** by using [virtual network peering](https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-manage-peering), as shown in the example command after this list. To do so, you must specify different IP ranges for the **VNet of AKS (for Ledger)** and the **VNet of AKS (for Auditor)** when you create those VNets.
+  * Allow **connections between Ledger and Auditor** to make ScalarDL (Auditor mode) work properly.
+  * For more details about these network requirements, refer to [Configure Network Peering for ScalarDL Auditor Mode](NetworkPeeringForScalarDLAuditor.mdx).
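+
+For reference, the following sketch creates one direction of the virtual network peering by using the Azure CLI; you must create the peering in the reverse direction as well. The resource group and VNet names are assumptions for illustration only:
+
+```console
+# Peer the Ledger VNet to the Auditor VNet (one direction; repeat with the roles reversed).
+# The resource group and VNet names are illustrative assumptions.
+az network vnet peering create \
+  --resource-group MyResourceGroup \
+  --name ledger-to-auditor \
+  --vnet-name LedgerVNet \
+  --remote-vnet AuditorVNet \
+  --allow-vnet-access
+```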
+
+:::warning
+
+For Byzantine fault detection in ScalarDL to work properly, do not deploy your application pods on the same AKS clusters as the ScalarDL Ledger and ScalarDL Auditor deployments.
+
+:::
+
+## Recommendations (optional)
+
+The following are some recommendations for deploying ScalarDL Ledger and ScalarDL Auditor. These recommendations are not required, so you can choose whether or not to apply these recommendations based on your needs.
+
+### Create at least three worker nodes and three pods per AKS cluster
+
+To ensure that the AKS cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [ScalarDL Ledger sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardl-custom-values.yaml) and [ScalarDL Auditor sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardl-audit-custom-values.yaml) of `podAntiAffinity` for spreading three pods across the worker nodes.
+
+:::note
+
+If you place the worker nodes in different [availability zones](https://learn.microsoft.com/en-us/azure/availability-zones/az-overview) (AZs), you can withstand an AZ failure.
+
+:::
+
+### Use 4vCPU / 8GB memory nodes for the worker node in the ScalarDL Ledger and ScalarDL Auditor node pool
+
+From the perspective of commercial licenses, resources for each pod running ScalarDL Ledger or ScalarDL Auditor are limited to 2vCPU / 4GB memory. In addition to the ScalarDL Ledger and ScalarDL Auditor pods, Kubernetes could deploy some of the following components to each worker node:
+
+* AKS cluster for ScalarDL Ledger
+  * ScalarDL Ledger pod (2vCPU / 4GB)
+  * Envoy proxy
+  * Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`)
+  * Kubernetes components
+* AKS cluster for ScalarDL Auditor
+  * ScalarDL Auditor pod (2vCPU / 4GB)
+  * Envoy proxy
+  * Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`)
+  * Kubernetes components
+
+With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [Create at least three worker nodes and three pods per AKS cluster](#create-at-least-three-worker-nodes-and-three-pods-per-aks-cluster). Also remember that, for Byzantine fault detection to work properly, you cannot deploy your application pods on the same AKS clusters as the ScalarDL Ledger and ScalarDL Auditor deployments.
+
+However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum for a production environment. You should also consider the resources of the AKS cluster (for example, the number of worker nodes, vCPUs per node, memory per node, ScalarDL Ledger pods, and ScalarDL Auditor pods), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node when deciding the worker node resources.
+
+### Create node pools for ScalarDL Ledger and ScalarDL Auditor pods
+
+By default, AKS creates one system node pool named **agentpool**, which is preferred for system pods (used to keep AKS running).
+We recommend creating additional node pools with **user** mode for ScalarDL Ledger and ScalarDL Auditor pods and deploying ScalarDL Ledger and ScalarDL Auditor pods on those additional node pools.
+
+### Configure cluster autoscaler in AKS
+
+If you want to scale ScalarDL Ledger and ScalarDL Auditor pods automatically by using [Horizontal Pod Autoscaler](https://learn.microsoft.com/en-us/azure/aks/concepts-scale#horizontal-pod-autoscaler), you should configure cluster autoscaler in AKS too. For details, refer to the official Microsoft documentation at [Cluster autoscaler](https://learn.microsoft.com/en-us/azure/aks/concepts-scale#cluster-autoscaler).
+
+In addition, if you configure cluster autoscaler, you should create a subnet in a VNet for AKS to ensure a sufficient number of IPs exist so that AKS can work without network issues after scaling. The required number of IPs varies depending on the networking plug-in. For more details about the number of IPs required, refer to the following:
+
+* [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet)
+* [Configure Azure CNI networking in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni)
+
+### Create the AKS cluster on a private network
+
+You should create the AKS cluster on a private network (private subnet in a VNet) since ScalarDL Ledger and ScalarDL Auditor do not provide any services to users directly via internet access. We recommend accessing ScalarDL Ledger and ScalarDL Auditor via a private network from your applications.
+
+### Create the AKS cluster by using Azure CNI, if necessary
+
+The AKS default networking plug-in is [kubenet](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet). If kubenet does not meet your requirements, you should use [Azure Container Networking Interface (CNI)](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni).
+
+For example, suppose you want to deploy multiple ScalarDL Ledger and ScalarDL Auditor environments on only one AKS cluster instead of two AKS clusters (e.g., multi-tenant ScalarDL) and control the connections between tenants by using [Kubernetes NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/). In that case, note that kubenet supports only the Calico Network Policy, which the [Azure support team does not support](https://learn.microsoft.com/en-us/azure/aks/use-network-policies#differences-between-azure-network-policy-manager-and-calico-network-policy-and-their-capabilities). The Calico Network Policy is supported only by the Calico community or through additional paid support.
+
+The Azure support and engineering teams, however, do support Azure CNI. So, if you want to use Kubernetes NetworkPolicies and receive support from the Azure support team, you should use Azure CNI.
+For more details about the differences between kubenet and Azure CNI, refer to the following official Microsoft documentation:
+
+* [Network concepts for applications in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-network)
+* [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet)
+* [Configure Azure CNI networking in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni)
+
+### Restrict connections by using some security features based on your requirements
+
+You should restrict unused connections in ScalarDL Ledger and ScalarDL Auditor. To restrict unused connections, you can use some security features in Azure, like [network security groups](https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview).
+
+The connections (ports) that ScalarDL Ledger and ScalarDL Auditor use by default are as follows:
+
+* ScalarDL Ledger
+  * 50051/TCP (accepts requests from a client and ScalarDL Auditor)
+  * 50052/TCP (accepts privileged requests from a client and ScalarDL Auditor)
+  * 50053/TCP (accepts pause and unpause requests from a scalar-admin client tool)
+  * 8080/TCP (accepts monitoring requests)
+* ScalarDL Auditor
+  * 40051/TCP (accepts requests from a client)
+  * 40052/TCP (accepts privileged requests from a client)
+  * 40053/TCP (accepts pause and unpause requests from a scalar-admin client tool)
+  * 8080/TCP (accepts monitoring requests)
+* Scalar Envoy (used with ScalarDL Ledger and ScalarDL Auditor)
+  * 50051/TCP (load balancing for ScalarDL Ledger)
+  * 50052/TCP (load balancing for ScalarDL Ledger)
+  * 40051/TCP (load balancing for ScalarDL Auditor)
+  * 40052/TCP (load balancing for ScalarDL Auditor)
+  * 9001/TCP (accepts monitoring requests for Scalar Envoy itself)
+
+:::note
+
+- If you change the default listening port for ScalarDL Ledger and ScalarDL Auditor in their configuration files (`ledger.properties` and `auditor.properties`, respectively), you must allow connections by using the port that you configured.
+- You must also allow the connections that AKS uses itself. For more details about AKS traffic requirements, refer to [Control egress traffic using Azure Firewall in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic).
+
+:::
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarProducts.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarProducts.mdx
new file mode 100644
index 00000000..0ecd0034
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateAKSClusterForScalarProducts.mdx
@@ -0,0 +1,20 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Guidelines for creating an AKS cluster for Scalar products
+
+To create an Azure Kubernetes Service (AKS) cluster for Scalar products, refer to the following:
+
+* [Guidelines for creating an AKS cluster for ScalarDB Server](CreateAKSClusterForScalarDB.mdx)
+* [Guidelines for creating an AKS cluster for ScalarDL Ledger](CreateAKSClusterForScalarDL.mdx)
+* [Guidelines for creating an AKS cluster for ScalarDL Ledger and ScalarDL Auditor](CreateAKSClusterForScalarDLAuditor.mdx)
+
+To deploy Scalar products on AKS, refer to the following:
+
+* [Deploy ScalarDB Server on AKS](ManualDeploymentGuideScalarDBServerOnAKS.mdx)
+* [Deploy ScalarDL Ledger on AKS](ManualDeploymentGuideScalarDLOnAKS.mdx)
+* [Deploy ScalarDL Ledger and ScalarDL Auditor on AKS](ManualDeploymentGuideScalarDLAuditorOnAKS.mdx)
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateBastionServer.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateBastionServer.mdx
new file mode 100644
index 00000000..331c147f
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateBastionServer.mdx
@@ -0,0 +1,47 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Create a bastion server
+
+This document explains how to create a bastion server and install some tools for the deployment of Scalar products.
+
+## Create a server on the same private network as a Kubernetes cluster
+
+It is recommended to create a Kubernetes cluster for Scalar products on a private network. If you create a Kubernetes cluster on a private network, you should create a bastion server on the same private network to access your Kubernetes cluster.
+
+## Install tools
+
+Please install the following tools on the bastion server according to their official documentation.
+
+* [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
+* [helm](https://helm.sh/docs/intro/install/)
+
+## Configure kubeconfig
+
+After you install the kubectl command, you must configure a **kubeconfig** to access your Kubernetes cluster. Refer to the following official documentation for more details on how to configure kubeconfig for each managed Kubernetes service.
+
+If you use Amazon EKS (Amazon Elastic Kubernetes Service), you must install the **AWS CLI** according to the official documentation [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). After that, you can see how to configure kubeconfig in [Creating or updating a kubeconfig file for an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html).
+
+If you use AKS (Azure Kubernetes Service), you must install the **Azure CLI** according to the official documentation [How to install the Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli). After that, you can see how to configure kubeconfig in [az aks get-credentials](https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials).
+
+## Check installation
+
+You can check whether the tools are installed as follows.
+
+* kubectl
+  ```console
+  kubectl version --client
+  ```
+* helm
+  ```console
+  helm version
+  ```
+
+You can also check whether your kubeconfig is properly configured as follows. If you see a URL response, kubectl is correctly configured to access your cluster.
+```console
+kubectl cluster-info
+```
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDB.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDB.mdx
new file mode 100644
index 00000000..56d8c918
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDB.mdx
@@ -0,0 +1,86 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+  - Deprecated
+displayed_sidebar: docsEnglish
+---
+
+# (Deprecated) Guidelines for creating an EKS cluster for ScalarDB Server
+
+:::warning
+
+ScalarDB Server is now deprecated. Please use [ScalarDB Cluster](ManualDeploymentGuideScalarDBClusterOnEKS.mdx) instead.
+
+:::
+
+This document explains the requirements and recommendations for creating an Amazon Elastic Kubernetes Service (EKS) cluster for ScalarDB Server deployment. For details on how to deploy ScalarDB Server on an EKS cluster, see [Deploy ScalarDB Server on Amazon EKS](ManualDeploymentGuideScalarDBServerOnEKS.mdx).
+
+## Before you begin
+
+You must create an EKS cluster based on the following requirements, recommendations, and your project's requirements. For specific details about how to create an EKS cluster, see the official Amazon documentation at [Creating an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html).
+
+## Requirements
+
+When deploying ScalarDB Server, you must:
+
+* Create the EKS cluster by using a [supported Kubernetes version](https://scalardb.scalar-labs.com/docs/latest/requirements/#kubernetes).
+* Configure the EKS cluster based on the version of Kubernetes and your project's requirements.
+
+## Recommendations (optional)
+
+The following are some recommendations for deploying ScalarDB Server. These recommendations are not required, so you can choose whether or not to apply these recommendations based on your needs.
+
+### Create at least three worker nodes and three pods
+
+To ensure that the EKS cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardb-custom-values.yaml) of `podAntiAffinity` for spreading three pods across the worker nodes.
+
+:::note
+
+If you place the worker nodes in different [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) (AZs), you can withstand an AZ failure.
+
+:::
+
+### Use 4vCPU / 8GB memory nodes for the worker node in the ScalarDB Server node group
+
+From the perspective of commercial licenses, resources for one pod running ScalarDB Server are limited to 2vCPU / 4GB memory.
+In addition to the ScalarDB Server pod, Kubernetes could deploy some of the following components to each worker node:
+
+* ScalarDB Server pod (2vCPU / 4GB)
+* Envoy proxy
+* Your application pods (if you choose to run your application's pods on the same worker node)
+* Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`)
+* Kubernetes components
+
+With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [Create at least three worker nodes and three pods](#create-at-least-three-worker-nodes-and-three-pods).
+
+However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum for a production environment. You should also consider the resources of the EKS cluster (for example, the number of worker nodes, vCPUs per node, memory per node, ScalarDB Server pods, and pods for your application), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node when deciding the worker node resources.
+
+### Configure Cluster Autoscaler in EKS
+
+If you want to scale ScalarDB Server pods automatically by using [Horizontal Pod Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html), you should configure Cluster Autoscaler in EKS too. For details, see the official Amazon documentation at [Autoscaling](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html#cluster-autoscaler).
+
+In addition, if you configure Cluster Autoscaler, you should create a subnet in an Amazon Virtual Private Cloud (VPC) for EKS with a large enough prefix (e.g., `/24`) to ensure a sufficient number of IPs exist so that EKS can work without network issues after scaling.
+
+### Create the EKS cluster on a private network
+
+You should create the EKS cluster on a private network (private subnet in a VPC) since ScalarDB Server does not provide any services to users directly via internet access. We recommend accessing ScalarDB Server via a private network from your applications.
+
+### Restrict connections by using some security features based on your requirements
+
+You should restrict unused connections in ScalarDB Server. To restrict unused connections, you can use some security features in AWS, like [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) and [network access control lists](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html).
+
+The connections (ports) that ScalarDB Server uses by default are as follows:
+
+* ScalarDB Server
+  * 60051/TCP (accepts requests from a client)
+  * 8080/TCP (accepts monitoring requests)
+* Scalar Envoy (used with ScalarDB Server)
+  * 60051/TCP (load balancing for ScalarDB Server)
+  * 9001/TCP (accepts monitoring requests for Scalar Envoy itself)
+
+:::note
+
+- If you change the default listening port for ScalarDB Server in the configuration file (`database.properties`), you must allow connections by using the port that you configured.
+- You must also allow the connections that EKS uses itself. For more details about Amazon EKS security group requirements, refer to [Amazon EKS security group requirements and considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html).
+
+:::
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDBCluster.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDBCluster.mdx
new file mode 100644
index 00000000..73ffe005
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDBCluster.mdx
@@ -0,0 +1,86 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Guidelines for creating an EKS cluster for ScalarDB Cluster
+
+This document explains the requirements and recommendations for creating an Amazon Elastic Kubernetes Service (EKS) cluster for ScalarDB Cluster deployment. For details on how to deploy ScalarDB Cluster on an EKS cluster, see [Deploy ScalarDB Cluster on Amazon EKS](ManualDeploymentGuideScalarDBClusterOnEKS.mdx).
+
+## Before you begin
+
+You must create an EKS cluster based on the following requirements, recommendations, and your project's requirements. For specific details about how to create an EKS cluster, see the official Amazon documentation at [Creating an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html).
+
+## Requirements
+
+When deploying ScalarDB Cluster, you must:
+
+* Create the EKS cluster by using a [supported Kubernetes version](https://scalardb.scalar-labs.com/docs/latest/requirements/#kubernetes).
+* Configure the EKS cluster based on the version of Kubernetes and your project's requirements.
+
+## Recommendations (optional)
+
+The following are some recommendations for deploying ScalarDB Cluster. These recommendations are not required, so you can choose whether or not to apply these recommendations based on your needs.
+
+### Create at least three worker nodes and three pods
+
+To ensure that the EKS cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardb-cluster-custom-values-indirect-mode.yaml) of `podAntiAffinity` for spreading three pods across the worker nodes.
+
+:::note
+
+If you place the worker nodes in different [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) (AZs), you can withstand an AZ failure.
+
+:::
+
+### Use 4vCPU / 8GB memory nodes for the worker node in the ScalarDB Cluster node group
+
+From the perspective of commercial licenses, resources for one pod running ScalarDB Cluster are limited to 2vCPU / 4GB memory. In addition to the ScalarDB Cluster pod, Kubernetes could deploy some of the following components to each worker node:
+
+* ScalarDB Cluster pod (2vCPU / 4GB)
+* Envoy proxy (if you use `indirect` client mode or use a programming language other than Java)
+* Your application pods (if you choose to run your application's pods on the same worker node)
+* Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`)
+* Kubernetes components
+
+:::note
+
+You do not need to deploy an Envoy pod when using `direct-kubernetes` mode.
+
+:::
+
+With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [Create at least three worker nodes and three pods](#create-at-least-three-worker-nodes-and-three-pods).
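+
+For reference, the following sketch creates such a node group by using `eksctl`. The cluster name and node group name are assumptions for illustration only, and `c5.xlarge` is one instance type that provides 4vCPU / 8GB memory:
+
+```console
+# Create a three-node managed node group with 4vCPU / 8GB memory instances.
+# The cluster name and node group name are illustrative assumptions.
+eksctl create nodegroup \
+  --cluster my-eks-cluster \
+  --name scalardb-cluster-pool \
+  --node-type c5.xlarge \
+  --nodes 3
+```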
+
+However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum for a production environment. You should also consider the resources of the EKS cluster (for example, the number of worker nodes, vCPUs per node, memory per node, ScalarDB Cluster pods, and pods for your application), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node when deciding the worker node resources.
+
+### Configure Cluster Autoscaler in EKS
+
+If you want to scale ScalarDB Cluster pods automatically by using [Horizontal Pod Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html), you should configure Cluster Autoscaler in EKS too. For details, see the official Amazon documentation at [Autoscaling](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html#cluster-autoscaler).
+
+In addition, if you configure Cluster Autoscaler, you should create a subnet in an Amazon Virtual Private Cloud (VPC) for EKS with a large enough prefix (e.g., `/24`) to ensure a sufficient number of IPs exist so that EKS can work without network issues after scaling.
+
+### Create the EKS cluster on a private network
+
+You should create the EKS cluster on a private network (private subnet in a VPC) since ScalarDB Cluster does not provide any services to users directly via internet access. We recommend accessing ScalarDB Cluster via a private network from your applications.
+
+### Restrict connections by using some security features based on your requirements
+
+You should restrict unused connections in ScalarDB Cluster. To restrict unused connections, you can use some security features in AWS, like [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) and [network access control lists](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html).
+
+The connections (ports) that ScalarDB Cluster uses by default are as follows:
+
+* ScalarDB Cluster
+  * 60053/TCP (accepts gRPC or SQL requests from a client)
+  * 8080/TCP (accepts GraphQL requests from a client)
+  * 9080/TCP (accepts monitoring requests)
+* Scalar Envoy (used with ScalarDB Cluster `indirect` mode)
+  * 60053/TCP (load balancing for ScalarDB Cluster)
+  * 9001/TCP (accepts monitoring requests for Scalar Envoy itself)
+
+:::note
+
+- If you change the default listening port for ScalarDB Cluster in the configuration file (`scalardb-cluster-node.properties`), you must allow connections by using the port that you configured.
+- You must also allow the connections that EKS uses itself. For more details about Amazon EKS security group requirements, refer to [Amazon EKS security group requirements and considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html).
+
+:::
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDL.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDL.mdx
new file mode 100644
index 00000000..fa83233d
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDL.mdx
@@ -0,0 +1,84 @@
+---
+displayed_sidebar: docsEnglish
+---
+
+# Guidelines for creating an EKS cluster for ScalarDL Ledger
+
+This document explains the requirements and recommendations for creating an Amazon Elastic Kubernetes Service (EKS) cluster for ScalarDL Ledger deployment. For details on how to deploy ScalarDL Ledger on an EKS cluster, see [Deploy ScalarDL Ledger on Amazon EKS](ManualDeploymentGuideScalarDLOnEKS.mdx).
+
+## Before you begin
+
+You must create an EKS cluster based on the following requirements, recommendations, and your project's requirements. For specific details about how to create an EKS cluster, see the official Amazon documentation at [Creating an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html).
+
+## Requirements
+
+When deploying ScalarDL Ledger, you must:
+
+* Create the EKS cluster by using a [supported Kubernetes version](https://scalardb.scalar-labs.com/docs/latest/requirements/#kubernetes).
+* Configure the EKS cluster based on the version of Kubernetes and your project's requirements.
+
+:::warning
+
+For Byzantine fault detection in ScalarDL to work properly, do not deploy your application pods on the same EKS cluster as the ScalarDL Ledger deployment.
+
+:::
+
+## Recommendations (optional)
+
+The following are some recommendations for deploying ScalarDL Ledger. These recommendations are not required, so you can choose whether or not to apply these recommendations based on your needs.
+
+### Create at least three worker nodes and three pods
+
+To ensure that the EKS cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardl-custom-values.yaml) of `podAntiAffinity` for spreading three pods across the worker nodes.
+
+:::note
+
+If you place the worker nodes in different [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) (AZs), you can withstand an AZ failure.
+
+:::
+
+### Use 4vCPU / 8GB memory nodes for the worker node in the ScalarDL Ledger node group
+
+From the perspective of commercial licenses, resources for one pod running ScalarDL Ledger are limited to 2vCPU / 4GB memory. In addition to the ScalarDL Ledger pod, Kubernetes could deploy some of the following components to each worker node:
+
+* ScalarDL Ledger pod (2vCPU / 4GB)
+* Envoy proxy
+* Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`)
+* Kubernetes components
+
+With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [Create at least three worker nodes and three pods](#create-at-least-three-worker-nodes-and-three-pods).
+
+However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum for a production environment.
+You should also consider the resources of the EKS cluster (for example, the number of worker nodes, vCPUs per node, memory per node, and ScalarDL Ledger pods), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node when deciding the worker node resources.
+
+### Configure Cluster Autoscaler in EKS
+
+If you want to scale ScalarDL Ledger pods automatically by using [Horizontal Pod Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html), you should configure Cluster Autoscaler in EKS too. For details, see the official Amazon documentation at [Autoscaling](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html#cluster-autoscaler).
+
+In addition, if you configure Cluster Autoscaler, you should create a subnet in an Amazon Virtual Private Cloud (VPC) for EKS with a large enough prefix (e.g., `/24`) to ensure a sufficient number of IPs exist so that EKS can work without network issues after scaling.
+
+### Create the EKS cluster on a private network
+
+You should create the EKS cluster on a private network (private subnet in a VPC) since ScalarDL Ledger does not provide any services to users directly via internet access. We recommend accessing ScalarDL Ledger via a private network from your applications.
+
+### Restrict connections by using some security features based on your requirements
+
+You should restrict unused connections in ScalarDL Ledger. To restrict unused connections, you can use some security features in AWS, like [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) and [network access control lists](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html).
+
+The connections (ports) that ScalarDL Ledger uses by default are as follows:
+
+* ScalarDL Ledger
+  * 50051/TCP (accepts requests from a client)
+  * 50052/TCP (accepts privileged requests from a client)
+  * 50053/TCP (accepts pause and unpause requests from a scalar-admin client tool)
+  * 8080/TCP (accepts monitoring requests)
+* Scalar Envoy (used with ScalarDL Ledger)
+  * 50051/TCP (load balancing for ScalarDL Ledger)
+  * 50052/TCP (load balancing for ScalarDL Ledger)
+  * 9001/TCP (accepts monitoring requests for Scalar Envoy itself)
+
+:::note
+
+- If you change the default listening port for ScalarDL Ledger in the configuration file (`ledger.properties`), you must allow connections by using the port that you configured.
+- You must also allow the connections that EKS uses itself. For more details about Amazon EKS security group requirements, refer to [Amazon EKS security group requirements and considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html).
+
+:::
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDLAuditor.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDLAuditor.mdx
new file mode 100644
index 00000000..f1752a04
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarDLAuditor.mdx
@@ -0,0 +1,103 @@
+---
+displayed_sidebar: docsEnglish
+---
+
+# Guidelines for creating an EKS cluster for ScalarDL Ledger and ScalarDL Auditor
+
+This document explains the requirements and recommendations for creating an Amazon Elastic Kubernetes Service (EKS) cluster for ScalarDL Ledger and ScalarDL Auditor deployment. For details on how to deploy ScalarDL Ledger and ScalarDL Auditor on an EKS cluster, see [Deploy ScalarDL Ledger and ScalarDL Auditor on Amazon EKS](ManualDeploymentGuideScalarDLAuditorOnEKS.mdx).
+
+## Before you begin
+
+You must create an EKS cluster based on the following requirements, recommendations, and your project's requirements. For specific details about how to create an EKS cluster, see the official Amazon documentation at [Creating an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html).
+
+## Requirements
+
+When deploying ScalarDL Ledger and ScalarDL Auditor, you must:
+
+* Create two EKS clusters by using a [supported Kubernetes version](https://scalardb.scalar-labs.com/docs/latest/requirements/#kubernetes).
+  * One EKS cluster for ScalarDL Ledger
+  * One EKS cluster for ScalarDL Auditor
+* Configure the EKS clusters based on the version of Kubernetes and your project's requirements.
+* Configure an Amazon Virtual Private Cloud (VPC) as follows.
+  * Connect the **VPC of EKS (for Ledger)** and the **VPC of EKS (for Auditor)** by using [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html). To do so, you must specify different IP ranges for the **VPC of EKS (for Ledger)** and the **VPC of EKS (for Auditor)** when you create those VPCs.
+  * Allow **connections between Ledger and Auditor** to make ScalarDL (Auditor mode) work properly.
+  * For more details about these network requirements, refer to [Configure Network Peering for ScalarDL Auditor Mode](NetworkPeeringForScalarDLAuditor.mdx).
+
+:::warning
+
+For Byzantine fault detection in ScalarDL to work properly, do not deploy your application pods on the same EKS clusters as the ScalarDL Ledger and ScalarDL Auditor deployments.
+
+:::
+
+## Recommendations (optional)
+
+The following are some recommendations for deploying ScalarDL Ledger and ScalarDL Auditor. These recommendations are not required, so you can choose whether or not to apply these recommendations based on your needs.
+
+### Create at least three worker nodes and three pods per EKS cluster
+
+To ensure that the EKS cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [ScalarDL Ledger sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardl-custom-values.yaml) and [ScalarDL Auditor sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardl-audit-custom-values.yaml) of `podAntiAffinity` for spreading three pods across the worker nodes.
+
+:::note
+
+If you place the worker nodes in different [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) (AZs), you can withstand an AZ failure.
+
+:::
+
+### Use 4vCPU / 8GB memory nodes for the worker node in the ScalarDL Ledger and ScalarDL Auditor node group
+
+From the perspective of commercial licenses, resources for each pod running ScalarDL Ledger or ScalarDL Auditor are limited to 2vCPU / 4GB memory. In addition to the ScalarDL Ledger and ScalarDL Auditor pods, Kubernetes could deploy some of the following components to each worker node:
+
+* EKS cluster for ScalarDL Ledger
+  * ScalarDL Ledger pod (2vCPU / 4GB)
+  * Envoy proxy
+  * Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`)
+  * Kubernetes components
+* EKS cluster for ScalarDL Auditor
+  * ScalarDL Auditor pod (2vCPU / 4GB)
+  * Envoy proxy
+  * Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`)
+  * Kubernetes components
+
+With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [Create at least three worker nodes and three pods per EKS cluster](#create-at-least-three-worker-nodes-and-three-pods-per-eks-cluster). Also remember that, for Byzantine fault detection to work properly, you cannot deploy your application pods on the same EKS clusters as the ScalarDL Ledger and ScalarDL Auditor deployments.
+
+However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum for a production environment. You should also consider the resources of the EKS cluster (for example, the number of worker nodes, vCPUs per node, memory per node, ScalarDL Ledger pods, and ScalarDL Auditor pods), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node when deciding the worker node resources.
+
+### Configure Cluster Autoscaler in EKS
+
+If you want to scale ScalarDL Ledger or ScalarDL Auditor pods automatically by using [Horizontal Pod Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html), you should configure Cluster Autoscaler in EKS too. For details, see the official Amazon documentation at [Autoscaling](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html#cluster-autoscaler).
+
+In addition, if you configure Cluster Autoscaler, you should create a subnet in a VPC for EKS with a large enough prefix (e.g., `/24`) to ensure a sufficient number of IPs exist so that EKS can work without network issues after scaling.
+
+### Create the EKS cluster on a private network
+
+You should create the EKS cluster on a private network (private subnet in a VPC) since ScalarDL Ledger and ScalarDL Auditor do not provide any services to users directly via internet access. We recommend accessing ScalarDL Ledger and ScalarDL Auditor via a private network from your applications.
+
+### Restrict connections by using some security features based on your requirements
+
+You should restrict unused connections in ScalarDL Ledger and ScalarDL Auditor. To restrict unused connections, you can use some security features in AWS, like [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) and [network access control lists](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html).
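+
+For example, the following sketch allows client requests to ScalarDL Ledger on port 50051, one of the ports listed below, by using the AWS CLI. The security group ID and source CIDR are assumptions for illustration only:
+
+```console
+# Allow client requests to ScalarDL Ledger (50051/TCP) from a private address range.
+# The security group ID and source CIDR are illustrative assumptions.
+aws ec2 authorize-security-group-ingress \
+  --group-id sg-0123456789abcdef0 \
+  --protocol tcp \
+  --port 50051 \
+  --cidr 10.0.0.0/16
+```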
+ +The connections (ports) that ScalarDL Ledger and ScalarDL Auditor use by default are as follows: + +* ScalarDL Ledger + * 50051/TCP (accepts requests from a client and ScalarDL Auditor) + * 50052/TCP (accepts privileged requests from a client and ScalarDL Auditor) + * 50053/TCP (accepts pause and unpause requests from a scalar-admin client tool) + * 8080/TCP (accepts monitoring requests) +* ScalarDL Auditor + * 40051/TCP (accepts requests from a client) + * 40052/TCP (accepts privileged requests from a client) + * 40053/TCP (accepts pause and unpause requests from a scalar-admin client tool) + * 8080/TCP (accepts monitoring requests) +* Scalar Envoy (used with ScalarDL Ledger and ScalarDL Auditor) + * 50051/TCP (load balancing for ScalarDL Ledger) + * 50052/TCP (load balancing for ScalarDL Ledger) + * 40051/TCP (load balancing for ScalarDL Auditor) + * 40052/TCP (load balancing for ScalarDL Auditor) + * 9001/TCP (accepts monitoring requests for Scalar Envoy itself) + +:::note + +- If you change the default listening port for ScalarDL Ledger and ScalarDL Auditor in their configuration files (`ledger.properties` and `auditor.properties`, respectively), you must allow the connections by using the port that you configured. +- You must also allow the connections that EKS uses itself. For more details about Amazon EKS security group requirements, refer to [Amazon EKS security group requirements and considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html). + +::: diff --git a/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarProducts.mdx b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarProducts.mdx new file mode 100644 index 00000000..278ee172 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/CreateEKSClusterForScalarProducts.mdx @@ -0,0 +1,21 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Guidelines for creating an Amazon EKS cluster for Scalar products + +To create an Amazon Elastic Kubernetes Service (EKS) cluster for Scalar products, refer to the following: + +* [Guidelines for creating an EKS cluster for ScalarDB Cluster](CreateEKSClusterForScalarDBCluster.mdx) +* [(Deprecated) Guidelines for creating an EKS cluster for ScalarDB Server](CreateEKSClusterForScalarDB.mdx) +* [Guidelines for creating an EKS cluster for ScalarDL Ledger](CreateEKSClusterForScalarDL.mdx) +* [Guidelines for creating an EKS cluster for ScalarDL Ledger and ScalarDL Auditor](CreateEKSClusterForScalarDLAuditor.mdx) + +To deploy Scalar products on Amazon EKS, refer to the following: + +* [Deploy ScalarDB Server on Amazon EKS (Amazon Elastic Kubernetes Service)](ManualDeploymentGuideScalarDBServerOnEKS.mdx) +* [Deploy ScalarDL Ledger on Amazon EKS (Amazon Elastic Kubernetes Service)](ManualDeploymentGuideScalarDLOnEKS.mdx) +* [Deploy ScalarDL Ledger and ScalarDL Auditor on Amazon EKS (Amazon Elastic Kubernetes Service)](ManualDeploymentGuideScalarDLAuditorOnEKS.mdx) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/HowToCreateKeyAndCertificateFiles.mdx b/versioned_docs/version-3.9/scalar-kubernetes/HowToCreateKeyAndCertificateFiles.mdx new file mode 100644 index 00000000..76ad97a1 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/HowToCreateKeyAndCertificateFiles.mdx @@ -0,0 +1,147 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# How to Create Private Key and Certificate Files for TLS Connections in 
Scalar Products + +This guide explains how to create private key and certificate files for TLS connections in ScalarDB Cluster and ScalarDL. When you enable the TLS feature, you must prepare private key and certificate files. + +## Certificate requirements + +* You can use only `RSA` or `ECDSA` as an algorithm for private key and certificate files. + +## Example steps to create sample private key and certificate files + +In this example, you'll create sample private key and certificate files by using `cfssl` and `cfssljson`. If you don't have those tools installed, please install `cfssl` and `cfssljson` to run this example. + +:::note + +* You can use other tools, like `openssl`, to create the private key and certificate files. Alternatively, you can ask a third-party CA or the administrator of your private CA to create the private key and certificate for your production environment. +* This example creates a self-signed certificate. However, it is strongly recommended that these certificates **not** be used in production. Please ask trusted issuers (a public CA or your private CA) to create certificate files for your production environment based on your security requirements. + +::: + +1. Create a working directory. + + ```console + mkdir -p ${HOME}/scalar/example/certs/ + ``` + +1. Change the working directory to `${HOME}/scalar/example/certs/`. + + ```console + cd ${HOME}/scalar/example/certs/ + ``` + +1. Create a JSON file that includes CA information. + + ```console + cat << 'EOF' > ${HOME}/scalar/example/certs/ca.json + { + "CN": "scalar-example-ca", + "key": { + "algo": "ecdsa", + "size": 256 + }, + "names": [ + { + "C": "JP", + "ST": "Tokyo", + "L": "Shinjuku", + "O": "Scalar Example CA" + } + ] + } + EOF + ``` + +1. Create the CA private key and certificate files. + + ```console + cfssl gencert -initca ca.json | cfssljson -bare ca + ``` + +1. Create a JSON file that includes CA configurations. + + ```console + cat << 'EOF' > ${HOME}/scalar/example/certs/ca-config.json + { + "signing": { + "default": { + "expiry": "87600h" + }, + "profiles": { + "scalar-example-ca": { + "expiry": "87600h", + "usages": [ + "signing", + "key encipherment", + "server auth" + ] + } + } + } + } + EOF + ``` + +1. Create a JSON file that includes server information. + + ```console + cat << 'EOF' > ${HOME}/scalar/example/certs/server.json + { + "CN": "scalar-example-server", + "hosts": [ + "server.scalar.example.com", + "localhost" + ], + "key": { + "algo": "ecdsa", + "size": 256 + }, + "names": [ + { + "C": "JP", + "ST": "Tokyo", + "L": "Shinjuku", + "O": "Scalar Example Server" + } + ] + } + EOF + ``` + +1. Create the private key and certificate files for the server. + + ```console + cfssl gencert -ca ca.pem -ca-key ca-key.pem -config ca-config.json -profile scalar-example-ca server.json | cfssljson -bare server + ``` + +1. Confirm that the private key and certificate files were created. + + ```console + ls -1 + ``` + + [Command execution result] + + ```console + ca-config.json + ca-key.pem + ca.csr + ca.json + ca.pem + server-key.pem + server.csr + server.json + server.pem + ``` + + In this case: + + * `server-key.pem` is the private key file. + * `server.pem` is the certificate file. + * `ca.pem` is the root CA certificate file. 
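+
+If you want to confirm that the server certificate chains to the sample root CA, you can verify it with `openssl`, assuming `openssl` is installed in your environment:
+
+```console
+# Verify that server.pem was signed by the sample root CA (ca.pem).
+openssl verify -CAfile ca.pem server.pem
+```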
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/HowToGetContainerImages.mdx b/versioned_docs/version-3.9/scalar-kubernetes/HowToGetContainerImages.mdx new file mode 100644 index 00000000..10b77dbc --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/HowToGetContainerImages.mdx @@ -0,0 +1,25 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# How to get the container images of Scalar products + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +You can get the container images of Scalar products in several ways. Please choose one of the following methods. + + + + You can get the container images from the public container repository if you have a commercial license. For more details on how to use container images, see [How to use the container images](./HowToUseContainerImages.mdx). + + + For details on how to get Scalar products from AWS Marketplace, see [How to install Scalar products through AWS Marketplace](./AwsMarketplaceGuide.mdx). + + + Scalar products are currently not available in Azure Marketplace. Please get the container images from one of the other methods. + + diff --git a/versioned_docs/version-3.9/scalar-kubernetes/HowToScaleScalarDB.mdx b/versioned_docs/version-3.9/scalar-kubernetes/HowToScaleScalarDB.mdx new file mode 100644 index 00000000..76d6e2a5 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/HowToScaleScalarDB.mdx @@ -0,0 +1,46 @@ +--- +tags: + - Community + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# How to Scale ScalarDB + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +This guide explains how to scale ScalarDB. The contents of this guide assume that you used [Scalar Helm Chart](https://github.com/scalar-labs/helm-charts) to deploy ScalarDB Cluster, which is the recommended way. + +:::note + +You might be able to resolve some performance issues by scaling ScalarDB Cluster if a bottleneck exists on the ScalarDB Cluster side. However, sometimes a performance issue is caused by a bottleneck in the backend databases. In such cases, scaling ScalarDB Cluster will not resolve the performance issue. + +Instead, please check where the bottleneck exists. If the bottleneck exists in the backend databases, consider scaling the backend databases. + +::: + + + + + 1. Add the following to your custom values file, replacing `` with the number of pods you want to scale: + + ```yaml + scalardbCluster: + replicaCount: + ``` + + 1. Upgrade your ScalarDB Cluster deployment by running the following `helm upgrade` command, which uses the updated custom values file. Be sure to replace the contents in the angle brackets as described: + + ```console + helm upgrade scalar-labs/scalardb-cluster -n -f / --version + ``` + + + + + ScalarDB Core is provided as a Java library. So, when you scale your application, ScalarDB scales with your application. + + + diff --git a/versioned_docs/version-3.9/scalar-kubernetes/HowToScaleScalarDL.mdx b/versioned_docs/version-3.9/scalar-kubernetes/HowToScaleScalarDL.mdx new file mode 100644 index 00000000..d1adfbe3 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/HowToScaleScalarDL.mdx @@ -0,0 +1,53 @@ +--- +displayed_sidebar: docsEnglish +--- + +# How to Scale ScalarDL + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +This guide explains how to scale ScalarDL. 
The contents of this guide assume that you used [Scalar Helm Chart](https://github.com/scalar-labs/helm-charts) to deploy ScalarDL, which is the recommended way. + +:::note + +You might be able to resolve some performance issues by scaling ScalarDL if a bottleneck exists on the ScalarDL side. However, sometimes a performance issue is caused by a bottleneck in the backend databases. In such cases, scaling ScalarDL will not resolve the performance issue. + +Instead, please check where the bottleneck exists. If the bottleneck exists in the backend databases, consider scaling the backend databases. + +::: + + + + + 1. Add the following to your custom values file, replacing `` with the number of pods you want to scale: + + ```yaml + ledger: + replicaCount: + ``` + + 1. Upgrade your ScalarDL Ledger deployment by running the following `helm upgrade` command, which uses the updated custom values file. Be sure to replace the contents in the angle brackets as described: + + ```console + helm upgrade scalar-labs/scalardl -n -f / --version + ``` + + + + + 1. Add the following to your custom values file, replacing `` with the number of pods you want to scale: + + ```yaml + auditor: + replicaCount: + ``` + + 1. Upgrade your ScalarDL Auditor deployment by running the following `helm upgrade` command, which uses the updated custom values file. Be sure to replace the contents in the angle brackets as described: + + ```console + helm upgrade scalar-labs/scalardl-audit -n -f / --version + ``` + + + diff --git a/versioned_docs/version-3.9/scalar-kubernetes/HowToUpgradeScalarDB.mdx b/versioned_docs/version-3.9/scalar-kubernetes/HowToUpgradeScalarDB.mdx new file mode 100644 index 00000000..98ce9714 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/HowToUpgradeScalarDB.mdx @@ -0,0 +1,86 @@ +--- +tags: + - Community + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# How to Upgrade ScalarDB + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +This guide explains how to upgrade to a newer version of ScalarDB. + +## Before you begin + +Before you upgrade to a new version, please check the [ScalarDB Cluster Compatibility Matrix](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/compatibility/) to ensure compatibility between ScalarDB Cluster and the client SDKs. + +## Upgrade versions + +To learn about upgrading your version of ScalarDB, select the type of upgrade you want to do. + + + + Major versions do **not** keep backward compatibility. So, you might need to do special operations when you upgrade from one major version to another major version. For example: + + - Update the database schema on the backend database side. + - Update the API in your application. + + For details on what you need when you upgrade to a major version, please refer to the release notes for the major version that you want to upgrade to. + + + Minor versions keep backward compatibility. So, you can upgrade ScalarDB from one minor version to another minor version in the same major version without doing any special operations. For example, you don't need to update the database schema on the backend database side or update the API in your application. + + + + If you use [Scalar Helm Chart](https://github.com/scalar-labs/helm-charts) to deploy ScalarDB Cluster, you can upgrade your ScalarDB Cluster deployment as follows: + + 1. Set the ScalarDB Cluster Helm Chart version as an environment variable. 
You can do this by running the following command to put the chart version into the environment variable `SCALAR_DB_CLUSTER_CHART_VERSION`:
+
+   ```console
+   SCALAR_DB_CLUSTER_CHART_VERSION=1.5.0
+   ```
+
+   :::tip
+
+   To search for the chart version that corresponds to the ScalarDB Cluster version, run the following command:
+
+   ```console
+   helm search repo scalar-labs/scalardb-cluster -l
+   ```
+
+   The following command might be helpful, but please make sure to replace the contents in the angle brackets with your version of ScalarDB Cluster:
+
+   ```console
+   SCALAR_DB_CLUSTER_VERSION=..; SCALAR_DB_CLUSTER_CHART_VERSION=$(helm search repo scalar-labs/scalardb-cluster -l | grep -F "${SCALAR_DB_CLUSTER_VERSION}" | awk '{print $2}' | sort --version-sort -r | head -n 1)
+   ```
+
+   :::
+
+   1. Upgrade your ScalarDB Cluster deployment by replacing the contents in the angle brackets as described:
+
+   ```console
+   helm upgrade scalar-labs/scalardb-cluster -n -f / --version ${SCALAR_DB_CLUSTER_CHART_VERSION}
+   ```
+
+   After you upgrade the ScalarDB Cluster deployment, you should consider upgrading the version of the [ScalarDB Cluster Java Client SDK](https://mvnrepository.com/artifact/com.scalar-labs/scalardb-cluster-java-client-sdk) or the [ScalarDB Cluster .NET Client SDK](https://www.nuget.org/packages/ScalarDB.Net.Client) on your application side.
+
+
+   ScalarDB Core is provided as a Java library. So, you can update the dependencies of your Java project and rebuild your application to upgrade ScalarDB versions.
+
+
+
+
+   Patch versions keep backward compatibility. So, you can upgrade ScalarDB from one patch version to another patch version in the same major version and minor version without doing any special operations. For example, you don't need to update the database schema on the backend database side or update the API in your application.
+
+   The method for upgrading to a patch version is the same as for upgrading to a minor version. For details on how to upgrade, see the [Upgrade to a minor version](?versions=upgrade-minor-version) tab.
+
+
+
+:::warning
+
+ScalarDB does **not** support downgrading to a previous version (major, minor, or patch). You can only upgrade to a newer version.
+
+:::
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/HowToUpgradeScalarDL.mdx b/versioned_docs/version-3.9/scalar-kubernetes/HowToUpgradeScalarDL.mdx
new file mode 100644
index 00000000..1d98c52e
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/HowToUpgradeScalarDL.mdx
@@ -0,0 +1,112 @@
+---
+displayed_sidebar: docsEnglish
+---
+
+# How to Upgrade ScalarDL
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This guide explains how to upgrade to a newer version of ScalarDL.
+
+## Before you begin
+
+Before you upgrade to a new version, please check the [ScalarDL Compatibility Matrix](https://scalardl.scalar-labs.com/docs/latest/compatibility/) to ensure compatibility between ScalarDL and the client SDKs.
+
+## Upgrade versions
+
+To learn about upgrading your version of ScalarDL, select the type of upgrade you want to do.
+
+
+
+  Major versions do **not** keep backward compatibility. So, you might need to do special operations when you upgrade from one major version to another major version. For example:
+
+  - Update the database schema on the backend database side.
+  - Update the API in your application.
+ + For details on what you need when you upgrade to a major version, please refer to the release notes for the major version that you want to upgrade to. + + + Minor versions keep backward compatibility. So, you can upgrade ScalarDL from one minor version to another minor version in the same major version without doing any special operations. For example, you don't need to update the database schema on the backend database side or update the API in your application. + + + + If you use [Scalar Helm Chart](https://github.com/scalar-labs/helm-charts) to deploy ScalarDL Ledger, you can upgrade your ScalarDL Ledger deployment as follows: + + 1. Set the ScalarDL Ledger Helm Chart version as an environment variable. You can do this by running the following command to put the chart version into the environment variable `SCALAR_DL_LEDGER_CHART_VERSION`: + + ```console + SCALAR_DL_LEDGER_CHART_VERSION=4.8.0 + ``` + + :::tip + + You can search for the chart version that corresponds to the ScalarDL Ledger version as follows: + + ```console + helm search repo scalar-labs/scalardl -l + ``` + + The following command might be helpful, but please make sure to replace the contents in the angle brackets with your version of ScalarDL Ledger: + + ```console + SCALAR_DL_VERSION=..; SCALAR_DL_LEDGER_CHART_VERSION=$(helm search repo scalar-labs/scalardl -l | grep -v -e "scalar-labs/scalardl-audit" | grep -F "${SCALAR_DL_VERSION}" | awk '{print $2}' | sort --version-sort -r | head -n 1) + ``` + + ::: + + 1. Upgrade your ScalarDL Ledger deployment by replacing the contents in the angle brackets as described: + + ```console + helm upgrade scalar-labs/scalardl -n -f / --version ${SCALAR_DL_LEDGER_CHART_VERSION} + ``` + + After you upgrade the ScalarDL Ledger deployment (and the ScalarDL Auditor deployment if you use Auditor mode), you should consider upgrading the version of the [ScalarDL Java Client SDK](https://mvnrepository.com/artifact/com.scalar-labs/scalardl-java-client-sdk) on your application side. + + + If you use [Scalar Helm Chart](https://github.com/scalar-labs/helm-charts) to deploy ScalarDL Auditor, you can upgrade your ScalarDL Auditor deployment as follows: + + 1. Set the ScalarDL Auditor Helm Chart version as an environment variable. You can do this by running the following command to put the chart version into the environment variable `SCALAR_DL_AUDITOR_CHART_VERSION`: + + ```console + SCALAR_DL_AUDITOR_CHART_VERSION=2.8.0 + ``` + + :::tip + + You can search for the chart version that corresponds to the ScalarDL Auditor version as follows: + + ```console + helm search repo scalar-labs/scalardl-audit -l + ``` + + The following command might be helpful, but please make sure to replace the contents in the angle brackets with your version of ScalarDL Auditor: + + ```console + SCALAR_DL_VERSION=..; SCALAR_DL_AUDITOR_CHART_VERSION=$(helm search repo scalar-labs/scalardl-audit -l | grep -F "${SCALAR_DL_VERSION}" | awk '{print $2}' | sort --version-sort -r | head -n 1) + ``` + + ::: + + 1. Upgrade your ScalarDL Auditor deployment by replacing the contents in the angle brackets as described: + + ```console + helm upgrade scalar-labs/scalardl-audit -n -f / --version ${SCALAR_DL_AUDITOR_CHART_VERSION} + ``` + + After you upgrade the ScalarDL Auditor deployment and the ScalarDL Ledger deployment, you should consider upgrading the version of the [ScalarDL Java Client SDK](https://mvnrepository.com/artifact/com.scalar-labs/scalardl-java-client-sdk) on your application side. 
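+
+  As a reference, you may also want to confirm that the upgraded chart was rolled out successfully before upgrading the client SDK. The following is a minimal sketch of such a check; the release name `scalardl-auditor` and the namespace `ns-scalar` are example values, so replace them with the values you actually use:
+
+  ```console
+  # Show the status of the upgraded Helm release.
+  helm status scalardl-auditor -n ns-scalar
+
+  # Confirm that the pods are running with the new version.
+  kubectl get pod -n ns-scalar
+  ```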
+ + + + + Patch versions keep backward compatibility. So, you can upgrade ScalarDL from one patch version to another patch version in the same major version and minor version without doing any special operations. For example, you don't need to update the database schema on the backend database side or update the API in your application. + + The method for upgrading to a patch version is the same as for upgrading to a minor version. For details on how to upgrade, see the [Upgrade to a minor version](?versions=upgrade-minor-version) tab. + + + +:::warning + +ScalarDL does **not** support downgrading to a previous version (major, minor, or patch). You can only upgrade to a newer version. + +::: diff --git a/versioned_docs/version-3.9/scalar-kubernetes/HowToUseContainerImages.mdx b/versioned_docs/version-3.9/scalar-kubernetes/HowToUseContainerImages.mdx new file mode 100644 index 00000000..95b93618 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/HowToUseContainerImages.mdx @@ -0,0 +1,137 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# How to use the container images + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +You can pull the container images from the public container repository. You must configure the license key and the certificate in your `.properties` file if you use the container images. + +## Prerequisites + +The public container images are available for the following products and versions: + +* ScalarDB Cluster v3.12 or later +* ScalarDL v3.9 or later + +## Pull the container images from the public container repository + +You can pull the container image of each product from the public container repository. To pull a container image, select your Scalar product to see the link to the container image. + + + + + Select your edition of ScalarDB Enterprise. + + + https://github.com/orgs/scalar-labs/packages/container/package/scalardb-cluster-node-byol-standard + + + https://github.com/orgs/scalar-labs/packages/container/package/scalardb-cluster-node-byol-premium + + + + + https://github.com/orgs/scalar-labs/packages/container/package/scalardl-ledger-byol + + + https://github.com/orgs/scalar-labs/packages/container/package/scalardl-auditor-byol + + + +If you're using Scalar Helm Charts, you must set `*.image.repository` in the custom values file for the product that you're using. Select your Scalar product to see how to set `*.image.repository`. + + + + Select your edition of ScalarDB Enterprise. + + + ```yaml + scalardbCluster: + image: + repository: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-standard" + ``` + + + ```yaml + scalardbCluster: + image: + repository: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium" + ``` + + + + + ```yaml + ledger: + image: + repository: "ghcr.io/scalar-labs/scalardl-ledger-byol" + ``` + + + ```yaml + auditor: + image: + repository: "ghcr.io/scalar-labs/scalardl-auditor-byol" + ``` + + + +## Set the license key in the `.properties` file + +To run the container images, you must set `license key` and `certificate` in your `.properties` file. Select your Scalar product to see how to set `license key` and `certificate`. If you don't have a license key, please [contact us](https://www.scalar-labs.com/contact). 
+
+
+
+  ```properties
+  scalar.db.cluster.node.licensing.license_key=
+  scalar.db.cluster.node.licensing.license_check_cert_pem=
+  ```
+
+
+  ```properties
+  scalar.dl.licensing.license_key=
+  scalar.dl.licensing.license_check_cert_pem=
+  ```
+
+
+  ```properties
+  scalar.dl.licensing.license_key=
+  scalar.dl.licensing.license_check_cert_pem=
+  ```
+
+
+
+If you're using Scalar Helm Charts, you must set the properties in the custom values file for the product that you're using. Select your Scalar product to see how to set the properties in the custom values file.
+
+
+
+  ```yaml
+  scalardbCluster:
+    scalardbClusterNodeProperties: |
+      scalar.db.cluster.node.licensing.license_key=
+      scalar.db.cluster.node.licensing.license_check_cert_pem=
+  ```
+
+
+  ```yaml
+  ledger:
+    ledgerProperties: |
+      scalar.dl.licensing.license_key=
+      scalar.dl.licensing.license_check_cert_pem=
+  ```
+
+
+  ```yaml
+  auditor:
+    auditorProperties: |
+      scalar.dl.licensing.license_key=
+      scalar.dl.licensing.license_check_cert_pem=
+  ```
+
+
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/K8sLogCollectionGuide.mdx b/versioned_docs/version-3.9/scalar-kubernetes/K8sLogCollectionGuide.mdx
new file mode 100644
index 00000000..f6013a54
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/K8sLogCollectionGuide.mdx
@@ -0,0 +1,183 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Collecting logs from Scalar products on a Kubernetes cluster
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This document explains how to deploy Grafana Loki and Promtail on Kubernetes with Helm. After following this document, you can collect the logs of Scalar products in your Kubernetes environment.
+
+If you use a managed Kubernetes cluster and you want to use the cloud service features for monitoring and logging, please refer to the following documents:
+
+* [Logging and monitoring on Amazon EKS](https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/amazon-eks-logging-monitoring.html)
+* [Monitoring Azure Kubernetes Service (AKS) with Azure Monitor](https://learn.microsoft.com/en-us/azure/aks/monitor-aks)
+
+## Prerequisites
+
+* Create a Kubernetes cluster.
+  * [Create an EKS cluster for Scalar products](CreateEKSClusterForScalarProducts.mdx)
+  * [Create an AKS cluster for Scalar products](CreateAKSClusterForScalarProducts.mdx)
+* Create a bastion server and set `kubeconfig`.
+  * [Create a bastion server](CreateBastionServer.mdx)
+* Deploy Prometheus Operator (Grafana is used to explore the collected logs).
+  * [Monitoring Scalar products on the Kubernetes cluster](K8sMonitorGuide.mdx)
+
+## Add the Grafana Helm repository
+
+This document uses Helm for the deployment of Loki and Promtail.
+
+```console
+helm repo add grafana https://grafana.github.io/helm-charts
+```
+```console
+helm repo update
+```
+
+## Prepare a custom values file
+
+Get the sample file [scalar-loki-stack-custom-values.yaml](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalar-loki-stack-custom-values.yaml) for loki-stack. The configuration in this sample file is recommended for collecting the logs of Scalar products.
+
+### Set nodeSelector in the custom values file (Optional)
+
+If you added labels to your Kubernetes worker nodes, you might need to set `nodeSelector` in the custom values file (`scalar-loki-stack-custom-values.yaml`) as follows. See the following examples based on the product you're using.
+
+
+
+  Select the ScalarDB product you're using.
+
+
+  ```yaml
+  promtail:
+    nodeSelector:
+      scalar-labs.com/dedicated-node: scalardb-cluster
+  ```
+
+
+  ```yaml
+  promtail:
+    nodeSelector:
+      scalar-labs.com/dedicated-node: scalardb
+  ```
+
+
+
+
+  Select the ScalarDL product you're using.
+
+
+  ```yaml
+  promtail:
+    nodeSelector:
+      scalar-labs.com/dedicated-node: scalardl-ledger
+  ```
+
+
+  ```yaml
+  promtail:
+    nodeSelector:
+      scalar-labs.com/dedicated-node: scalardl-auditor
+  ```
+
+
+
+
+### Set tolerations in the custom values file (Optional)
+
+If you added taints to your Kubernetes worker nodes, you might need to set `tolerations` in the custom values file (`scalar-loki-stack-custom-values.yaml`) as follows. See the following examples based on the product you're using.
+
+
+
+  Select the ScalarDB product you're using.
+
+
+  ```yaml
+  promtail:
+    tolerations:
+      - effect: NoSchedule
+        key: scalar-labs.com/dedicated-node
+        operator: Equal
+        value: scalardb-cluster
+  ```
+
+
+  ```yaml
+  promtail:
+    tolerations:
+      - effect: NoSchedule
+        key: scalar-labs.com/dedicated-node
+        operator: Equal
+        value: scalardb
+  ```
+
+
+
+
+  Select the ScalarDL product you're using.
+
+
+  ```yaml
+  promtail:
+    tolerations:
+      - effect: NoSchedule
+        key: scalar-labs.com/dedicated-node
+        operator: Equal
+        value: scalardl-ledger
+  ```
+
+
+  ```yaml
+  promtail:
+    tolerations:
+      - effect: NoSchedule
+        key: scalar-labs.com/dedicated-node
+        operator: Equal
+        value: scalardl-auditor
+  ```
+
+
+
+
+## Deploy Loki and Promtail
+
+It is recommended to deploy Loki and Promtail in the same namespace, `monitoring`, as Prometheus and Grafana. You have already created the `monitoring` namespace in the document [Monitoring Scalar products on the Kubernetes cluster](K8sMonitorGuide.mdx).
+
+```console
+helm install scalar-logging-loki grafana/loki-stack -n monitoring -f scalar-loki-stack-custom-values.yaml
+```
+
+## Check if Loki and Promtail are deployed
+
+If the Loki and Promtail pods are deployed properly, you can see that the `STATUS` is `Running` by using the `kubectl get pod -n monitoring` command. Because the Promtail pods are deployed as a DaemonSet, the number of Promtail pods depends on the number of Kubernetes nodes. In the following example, there are three worker nodes for Scalar products in the Kubernetes cluster.
+
+```console
+kubectl get pod -n monitoring
+```
+
+You should see the following output:
+
+```console
+NAME                                 READY   STATUS    RESTARTS   AGE
+scalar-logging-loki-0                1/1     Running   0          35m
+scalar-logging-loki-promtail-2fnzn   1/1     Running   0          32m
+scalar-logging-loki-promtail-2pwkx   1/1     Running   0          30m
+scalar-logging-loki-promtail-gfx44   1/1     Running   0          32m
+```
+
+## View logs in the Grafana dashboard
+
+You can view the collected logs in the Grafana dashboard as follows:
+
+1. Access the Grafana dashboard.
+1. Go to the `Explore` page.
+1. Select `Loki` from the top-left pull-down menu.
+1. Set conditions to query logs.
+1. Select the `Run query` button at the top right.
+
+For more details on how to access the Grafana dashboard, see [Monitoring Scalar products on the Kubernetes cluster](K8sMonitorGuide.mdx).
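+
+As a reference for the query conditions in step 4 above, the following is a sample LogQL query that filters the collected logs for lines containing `ERROR`. The `namespace` label and the `ns-scalar` value are assumptions based on a typical deployment, so adjust them to match the labels in your environment:
+
+```
+{namespace="ns-scalar"} |= "ERROR"
+```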
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/K8sMonitorGuide.mdx b/versioned_docs/version-3.9/scalar-kubernetes/K8sMonitorGuide.mdx
new file mode 100644
index 00000000..66cb123b
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/K8sMonitorGuide.mdx
@@ -0,0 +1,156 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Monitoring Scalar products on a Kubernetes cluster
+
+This document explains how to deploy Prometheus Operator on Kubernetes with Helm. After following this document, you can use Prometheus, Alertmanager, and Grafana for monitoring Scalar products in your Kubernetes environment.
+
+If you use a managed Kubernetes cluster and you want to use the cloud service features for monitoring and logging, please refer to the following documents:
+
+* [Logging and monitoring on Amazon EKS](https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/amazon-eks-logging-monitoring.html)
+* [Monitoring Azure Kubernetes Service (AKS) with Azure Monitor](https://learn.microsoft.com/en-us/azure/aks/monitor-aks)
+
+## Prerequisites
+
+* Create a Kubernetes cluster.
+  * [Create an EKS cluster for Scalar products](CreateEKSClusterForScalarProducts.mdx)
+  * [Create an AKS cluster for Scalar products](CreateAKSClusterForScalarProducts.mdx)
+* Create a bastion server and set `kubeconfig`.
+  * [Create a bastion server](CreateBastionServer.mdx)
+
+## Add the prometheus-community Helm repository
+
+This document uses Helm for the deployment of Prometheus Operator.
+
+```console
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+```
+```console
+helm repo update
+```
+
+## Prepare a custom values file
+
+Get the sample file [scalar-prometheus-custom-values.yaml](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalar-prometheus-custom-values.yaml) for kube-prometheus-stack. The configuration in this sample file is recommended for monitoring Scalar products.
+
+In this sample file, the Service resources are not exposed for access from outside the Kubernetes cluster. If you want to access the dashboards from outside your Kubernetes cluster, you must set `*.service.type` to `LoadBalancer` or `*.ingress.enabled` to `true`.
+
+For more details on the configurations of kube-prometheus-stack, refer to the following official documentation:
+
+* [kube-prometheus-stack - Configuration](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#configuration)
+
+## Deploy Prometheus Operator
+
+Scalar products assume that the Prometheus Operator is deployed in the `monitoring` namespace by default. So, please create the `monitoring` namespace and deploy Prometheus Operator in that namespace.
+
+1. Create a namespace `monitoring` on Kubernetes.
+   ```console
+   kubectl create namespace monitoring
+   ```
+
+1. Deploy the kube-prometheus-stack.
+   ```console
+   helm install scalar-monitoring prometheus-community/kube-prometheus-stack -n monitoring -f scalar-prometheus-custom-values.yaml
+   ```
+
+## Check if the Prometheus Operator is deployed
+
+If the Prometheus Operator pods (including Prometheus, Alertmanager, and Grafana) are deployed properly, you can see that the `STATUS` is `Running` by using the following command:
+
+```console
+kubectl get pod -n monitoring
+```
+
+You should see the following output:
+
+```console
+NAME                                                     READY   STATUS    RESTARTS   AGE
+alertmanager-scalar-monitoring-kube-pro-alertmanager-0   2/2     Running   0          55s
+prometheus-scalar-monitoring-kube-pro-prometheus-0       2/2     Running   0          55s
+scalar-monitoring-grafana-cb4f9f86b-jmkpz                3/3     Running   0          62s
+scalar-monitoring-kube-pro-operator-865bbb8454-9ppkc     1/1     Running   0          62s
+```
+
+## Deploy (or Upgrade) Scalar products using Helm Charts
+
+1. To enable Prometheus monitoring for Scalar products, you must set the following configurations to `true` in the custom values file.
+
+   * Configurations
+     * `*.prometheusRule.enabled`
+     * `*.grafanaDashboard.enabled`
+     * `*.serviceMonitor.enabled`
+
+   Please refer to the following documents for more details on the custom values file of each Scalar product.
+
+   * [ScalarDB Cluster](../helm-charts/configure-custom-values-scalardb-cluster.mdx#prometheus-and-grafana-configurations-recommended-in-production-environments)
+   * [(Deprecated) ScalarDB Server](../helm-charts/configure-custom-values-scalardb.mdx#prometheusgrafana-configurations-recommended-in-the-production-environment)
+   * [(Deprecated) ScalarDB GraphQL](../helm-charts/configure-custom-values-scalardb-graphql.mdx#prometheusgrafana-configurations-recommended-in-the-production-environment)
+   * [ScalarDL Ledger](../helm-charts/configure-custom-values-scalardl-ledger.mdx#prometheusgrafana-configurations-recommended-in-the-production-environment)
+   * [ScalarDL Auditor](../helm-charts/configure-custom-values-scalardl-auditor.mdx#prometheusgrafana-configurations-recommended-in-the-production-environment)
+
+1. Deploy (or upgrade) Scalar products using Helm Charts with the above custom values file.
+
+   Please refer to the following documents for more details on how to deploy/upgrade Scalar products.
+
+   * [ScalarDB Cluster](../helm-charts/how-to-deploy-scalardb-cluster.mdx)
+   * [(Deprecated) ScalarDB Server](../helm-charts/how-to-deploy-scalardb.mdx)
+   * [(Deprecated) ScalarDB GraphQL](../helm-charts/how-to-deploy-scalardb-graphql.mdx)
+   * [ScalarDL Ledger](../helm-charts/how-to-deploy-scalardl-ledger.mdx)
+   * [ScalarDL Auditor](../helm-charts/how-to-deploy-scalardl-auditor.mdx)
+
+## How to access dashboards
+
+When you set `*.service.type` to `LoadBalancer` or `*.ingress.enabled` to `true`, you can access the dashboards via a Kubernetes Service or Ingress resource. The concrete implementation and access method depend on the Kubernetes cluster. If you use a managed Kubernetes cluster, refer to the cloud provider's official documentation for more details.
+
+* EKS
+  * [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
+  * [Application load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)
+* AKS
+  * [Use a public standard load balancer in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard)
+  * [Create an ingress controller in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/ingress-basic)
+
+## Access the dashboards from your local machine (for testing purposes only / not recommended in production environments)
+
+You can access each dashboard from your local machine by using the `kubectl port-forward` command.
+
+1. Forward the port of each service to your local machine.
+   * Prometheus
+     ```console
+     kubectl port-forward -n monitoring svc/scalar-monitoring-kube-pro-prometheus 9090:9090
+     ```
+   * Alertmanager
+     ```console
+     kubectl port-forward -n monitoring svc/scalar-monitoring-kube-pro-alertmanager 9093:9093
+     ```
+   * Grafana
+     ```console
+     kubectl port-forward -n monitoring svc/scalar-monitoring-grafana 3000:3000
+     ```
+
+1. Access each dashboard.
+   * Prometheus
+     ```console
+     http://localhost:9090/
+     ```
+   * Alertmanager
+     ```console
+     http://localhost:9093/
+     ```
+   * Grafana
+     ```console
+     http://localhost:3000/
+     ```
+   * Note:
+     * You can get the username and password of Grafana as follows:
+       * Username
+         ```console
+         kubectl get secrets scalar-monitoring-grafana -n monitoring -o jsonpath='{.data.admin-user}' | base64 -d
+         ```
+       * Password
+         ```console
+         kubectl get secrets scalar-monitoring-grafana -n monitoring -o jsonpath='{.data.admin-password}' | base64 -d
+         ```
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDBClusterOnEKS.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDBClusterOnEKS.mdx
new file mode 100644
index 00000000..bee02994
--- /dev/null
+++ b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDBClusterOnEKS.mdx
@@ -0,0 +1,66 @@
+---
+tags:
+  - Enterprise Standard
+  - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Deploy ScalarDB Cluster on Amazon Elastic Kubernetes Service (EKS)
+
+This guide explains how to deploy ScalarDB Cluster on Amazon Elastic Kubernetes Service (EKS).
+
+In this guide, you will create one of the following two environments in your AWS environment. The environments differ depending on which [client mode](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api#client-modes) you use:
+
+* **[`direct-kubernetes` client mode](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api#direct-kubernetes-client-mode).** In this mode, you deploy your application in the same EKS cluster as your ScalarDB Cluster deployment.
+
+  ![image](images/png/EKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio.png)
+
+* **[`indirect` client mode](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api#indirect-client-mode).** In this mode, you deploy your application in an environment that is different from the EKS cluster that contains your ScalarDB Cluster deployment.
+
+  ![image](images/png/EKS_ScalarDB_Cluster_Indirect_Mode.drawio.png)
+
+## Step 1. 
Subscribe to ScalarDB Cluster in AWS Marketplace + +You must get the ScalarDB Cluster container image by visiting AWS Marketplace and subscribing to [ScalarDB Cluster Standard Edition (Pay-As-You-Go)](https://aws.amazon.com/marketplace/pp/prodview-jx6qxatkxuwm4) or [ScalarDB Cluster Premium Edition (Pay-As-You-Go)](https://aws.amazon.com/marketplace/pp/prodview-djqw3zk6dwyk6). For details on how to subscribe to ScalarDB Cluster in AWS Marketplace, see [Subscribe to Scalar products from AWS Marketplace](AwsMarketplaceGuide.mdx#subscribe-to-scalar-products-from-aws-marketplace). + +## Step 2. Create an EKS cluster + +You must create an EKS cluster for the ScalarDB Cluster deployment. For details, see [Guidelines for creating an Amazon EKS cluster for Scalar products](CreateEKSClusterForScalarProducts.mdx). + +## Step 3. Set up a database for ScalarDB Cluster + +You must prepare a database before deploying ScalarDB Cluster. To see which types of databases ScalarDB supports, refer to [ScalarDB Supported Databases](https://scalardb.scalar-labs.com/docs/latest/requirements#databases). + +For details on setting up a database, see [Set up a database for ScalarDB/ScalarDL deployment on AWS](SetupDatabaseForAWS.mdx). + +## Step 4. Create a bastion server + +To execute some tools for deploying and managing ScalarDB Cluster on EKS, you must prepare a bastion server in the same Amazon Virtual Private Cloud (VPC) of the EKS cluster that you created in **Step 2**. For details, see [Create a Bastion Server](CreateBastionServer.mdx). + +## Step 5. Prepare a custom values file for the Scalar Helm Chart + +To perform tasks, like accessing information in the database that you created in **Step 3**, you must configure a custom values file for the Scalar Helm Chart for ScalarDB Cluster based on your environment. For details, see [Configure a custom values file for Scalar Helm Charts](../helm-charts/configure-custom-values-file.mdx). + +**Note:** If you deploy your application in an environment that is different from the EKS cluster that has your ScalarDB Cluster deployment (i.e., you use `indirect` client mode), you must set the `envoy.enabled` parameter to `true` and the `envoy.service.type` parameter to `LoadBalancer` to access Scalar Envoy from your application. + +## Step 6. Deploy ScalarDB Cluster by using the Scalar Helm Chart + +Deploy ScalarDB Cluster on your EKS cluster by using the Helm Chart for ScalarDB Cluster. For details, see [Deploy Scalar products using Scalar Helm Charts](../helm-charts/how-to-deploy-scalar-products.mdx). + +**Note:** We recommend creating a dedicated namespace by using the `kubectl create ns scalardb-cluster` command and deploying ScalarDB Cluster in the namespace by using the `-n scalardb-cluster` option with the `helm install` command. + +## Step 7. Check the status of your ScalarDB Cluster deployment + +After deploying ScalarDB Cluster in your EKS cluster, you must check the status of each component. For details, see [Components to Regularly Check When Running in a Kubernetes Environment](RegularCheck.mdx). + +## Step 8. Monitor your ScalarDB Cluster deployment + +After deploying ScalarDB Cluster in your EKS cluster, we recommend monitoring the deployed components and collecting their logs, especially in production. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +## Step 9. 
Deploy your application + +If you use [`direct-kubernetes` client mode](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api#direct-kubernetes-client-mode), you must deploy additional Kubernetes resources. For details, see [Deploy your client application on Kubernetes with `direct-kubernetes` mode](../helm-charts/how-to-deploy-scalardb-cluster.mdx#deploy-your-client-application-on-kubernetes-with-direct-kubernetes-mode). + +## Remove ScalarDB Cluster from EKS + +If you want to remove the environment that you created, please remove all the resources in reverse order from which you created them in. diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDBServerOnAKS.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDBServerOnAKS.mdx new file mode 100644 index 00000000..5a9e1d90 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDBServerOnAKS.mdx @@ -0,0 +1,63 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium + - Deprecated +displayed_sidebar: docsEnglish +--- + +# [Deprecated] Deploy ScalarDB Server on Azure Kubernetes Service (AKS) + +This guide explains how to deploy ScalarDB Server on Azure Kubernetes Service (AKS). + +In this guide, you will create one of the following two environments in your Azure environment. The difference between the two environments is how you plan to deploy the application: + +* Deploy your application in the same AKS cluster as your ScalarDB Server deployment. In this case, you don't need to use the load balancers that Azure provides to access Scalar Envoy from your application. + + ![image](images/png/AKS_ScalarDB_Server_App_In_Cluster.drawio.png) + +* Deploy your application in an environment that is different from the AKS cluster that contains your ScalarDB Server deployment. In this case, you must use the load balancers that Azure provides to access Scalar Envoy from your application. + + ![image](images/png/AKS_ScalarDB_Server_App_Out_Cluster.drawio.png) + +## Step 1. Subscribe to ScalarDB Server in Azure Marketplace + +You must get the ScalarDB Server container image by visiting [Azure Marketplace](https://azuremarketplace.microsoft.com/en/marketplace/apps/scalarinc.scalardb) and subscribing to ScalarDB Server. For details on how to subscribe to ScalarDB Server in Azure Marketplace, see [Get Scalar products from Microsoft Azure Marketplace](AzureMarketplaceGuide.mdx#get-scalar-products-from-microsoft-azure-marketplace). + +## Step 2. Create an AKS cluster + +You must create an AKS cluster for the ScalarDB Server deployment. For details, see [Guidelines for creating an AKS cluster for Scalar products](CreateAKSClusterForScalarProducts.mdx). + +## Step 3. Set up a database for ScalarDB Server + +You must prepare a database before deploying ScalarDB Server. To see which types of databases ScalarDB supports, refer to [ScalarDB Supported Databases](https://scalardb.scalar-labs.com/docs/latest/requirements#databases). + +For details on setting up a database, see [Set up a database for ScalarDB/ScalarDL deployment in Azure](SetupDatabaseForAzure.mdx). + +## Step 4. Create a bastion server + +To execute some tools for deploying and managing ScalarDB Server on AKS, you must prepare a bastion server in the same Azure Virtual Network (VNet) of the AKS cluster that you created in **Step 2**. For details, see [Create a Bastion Server](CreateBastionServer.mdx). + +## Step 5. 
Prepare a custom values file for the Scalar Helm Chart + +To perform tasks, like accessing information in the database that you created in **Step 3**, you must configure a custom values file for the Scalar Helm Chart for ScalarDB Server based on your environment. For details, see [Configure a custom values file of Scalar Helm Chart](../helm-charts/configure-custom-values-file.mdx). + +**Note:** If you deploy your application in an environment that is different from the AKS cluster that has your ScalarDB Server deployment, you must set the `envoy.service.type` parameter to `LoadBalancer` to access Scalar Envoy from your application. + +## Step 6. Deploy ScalarDB Server by using the Scalar Helm Chart + +Deploy ScalarDB Server on your AKS cluster by using the Helm Chart for ScalarDB Server. For details, see [Deploy Scalar Products using Scalar Helm Chart](../helm-charts/how-to-deploy-scalar-products.mdx). + +**Note:** We recommend creating a dedicated namespace by using the `kubectl create ns scalardb` command and deploying ScalarDB Server in the namespace by using the `-n scalardb` option with the `helm install` command. + +## Step 7. Check the status of your ScalarDB Server deployment + +After deploying ScalarDB Server in your AKS cluster, you must check the status of each component. For details, see [Components to Regularly Check When Running in a Kubernetes Environment](RegularCheck.mdx). + +## Step 8. Monitor your ScalarDB Server deployment + +After deploying ScalarDB Server in your AKS cluster, we recommend monitoring the deployed components and collecting their logs, especially in production. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +## Remove ScalarDB Server from AKS + +If you want to remove the environment that you created, please remove all the resources in reverse order from which you created them in. diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDBServerOnEKS.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDBServerOnEKS.mdx new file mode 100644 index 00000000..e2781658 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDBServerOnEKS.mdx @@ -0,0 +1,63 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium + - Deprecated +displayed_sidebar: docsEnglish +--- + +# Deploy ScalarDB Server on Amazon Elastic Kubernetes Service (EKS) + +This guide explains how to deploy ScalarDB Server on Amazon Elastic Kubernetes Service (EKS). + +In this guide, you will create one of the following two environments in your AWS environment. The difference between the two environments is how you plan to deploy the application: + +* Deploy your application in the same EKS cluster as your ScalarDB Server deployment. In this case, you don't need to use the load balancers that AWS provides to access Scalar Envoy from your application. + + ![image](images/png/EKS_ScalarDB_Server_App_In_Cluster.drawio.png) + +* Deploy your application in an environment that is different from the EKS cluster that contains your ScalarDB Server deployment. In this case, you must use the load balancers that AWS provides to access Scalar Envoy from your application. + + ![image](images/png/EKS_ScalarDB_Server_App_Out_Cluster.drawio.png) + +## Step 1. 
Subscribe to ScalarDB Server in AWS Marketplace + +You must get the ScalarDB Server container image by visiting [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-rzbuhxgvqf4d2) and subscribing to ScalarDB Server. For details on how to subscribe to ScalarDB Server in AWS Marketplace, see [Subscribe to Scalar products from AWS Marketplace](AwsMarketplaceGuide.mdx#subscribe-to-scalar-products-from-aws-marketplace). + +## Step 2. Create an EKS cluster + +You must create an EKS cluster for the ScalarDB Server deployment. For details, see [Guidelines for creating an Amazon EKS cluster for Scalar products](CreateEKSClusterForScalarProducts.mdx). + +## Step 3. Set up a database for ScalarDB Server + +You must prepare a database before deploying ScalarDB Server. To see which types of databases ScalarDB supports, refer to [ScalarDB Supported Databases](https://scalardb.scalar-labs.com/docs/latest/requirements#databases). + +For details on setting up a database, see [Set up a database for ScalarDB/ScalarDL deployment on AWS](SetupDatabaseForAWS.mdx). + +## Step 4. Create a bastion server + +To execute some tools for deploying and managing ScalarDB Server on EKS, you must prepare a bastion server in the same Amazon Virtual Private Cloud (VPC) of the EKS cluster that you created in **Step 2**. For details, see [Create a Bastion Server](CreateBastionServer.mdx). + +## Step 5. Prepare a custom values file for the Scalar Helm Chart + +To perform tasks, like accessing information in the database that you created in **Step 3**, you must configure a custom values file for the Scalar Helm Chart for ScalarDB Server based on your environment. For details, see [Configure a custom values file for Scalar Helm Charts](../helm-charts/configure-custom-values-file.mdx). + +**Note:** If you deploy your application in an environment that is different from the EKS cluster that has your ScalarDB Server deployment, you must set the `envoy.service.type` parameter to `LoadBalancer` to access Scalar Envoy from your application. + +## Step 6. Deploy ScalarDB Server by using the Scalar Helm Chart + +Deploy ScalarDB Server on your EKS cluster by using the Helm Chart for ScalarDB Server. For details, see [Deploy Scalar products using Scalar Helm Charts](../helm-charts/how-to-deploy-scalar-products.mdx). + +**Note:** We recommend creating a dedicated namespace by using the `kubectl create ns scalardb` command and deploying ScalarDB Server in the namespace by using the `-n scalardb` option with the `helm install` command. + +## Step 7. Check the status of your ScalarDB Server deployment + +After deploying ScalarDB Server in your EKS cluster, you must check the status of each component. For details, see [Components to Regularly Check When Running in a Kubernetes Environment](RegularCheck.mdx). + +## Step 8. Monitor your ScalarDB Server deployment + +After deploying ScalarDB Server in your EKS cluster, we recommend monitoring the deployed components and collecting their logs, especially in production. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +## Remove ScalarDB Server from EKS + +If you want to remove the environment that you created, please remove all the resources in reverse order from which you created them in. 
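+
+For example, if you deployed ScalarDB Server by using the Scalar Helm Chart as described above, removing the Helm release and its dedicated namespace might look like the following sketch. The release name `scalardb` and the namespace `scalardb` are assumptions based on the note in Step 6, so replace them with your own values. After that, remember to also remove the bastion server, the database, and the EKS cluster:
+
+```console
+# Remove the ScalarDB Server Helm release.
+helm uninstall scalardb -n scalardb
+
+# Remove the dedicated namespace.
+kubectl delete ns scalardb
+```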
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLAuditorOnAKS.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLAuditorOnAKS.mdx new file mode 100644 index 00000000..6bae5bcd --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLAuditorOnAKS.mdx @@ -0,0 +1,99 @@ +--- +displayed_sidebar: docsEnglish +--- + +# Deploy ScalarDL Ledger and ScalarDL Auditor on Azure Kubernetes Service (AKS) + +This guide explains how to deploy ScalarDL Ledger and ScalarDL Auditor on Azure Kubernetes Service (AKS). + +In this guide, you will create one of the following three environments in your Azure environment. To make Byzantine fault detection work properly, we recommend deploying ScalarDL Ledger and ScalarDL Auditor on different administrative domains (i.e., separate environments). + +* Use different Azure accounts (most recommended way) + + ![image](images/png/AKS_ScalarDL_Auditor_Multi_Account.drawio.png) + +* Use different Azure Virtual Networks (VNets) (second recommended way) + + ![image](images/png/AKS_ScalarDL_Auditor_Multi_VNet.drawio.png) + +* Use different namespaces (third recommended way) + + ![image](images/png/AKS_ScalarDL_Auditor_Multi_Namespace.drawio.png) + +**Note:** This guide follows the second recommended way, "Use different VNets." + +## Step 1. Get the ScalarDL Ledger and ScalarDL Auditor container images + +You must get the ScalarDL Ledger and ScalarDL Auditor container images. For details, see [How to get the container images of Scalar products](HowToGetContainerImages.mdx). + +## Step 2. Create an AKS cluster for ScalarDL Ledger + +You must create an AKS cluster for the ScalarDL Ledger deployment. For details, see [Guidelines for creating an AKS cluster for Scalar products](CreateAKSClusterForScalarProducts.mdx). + +## Step 3. Create an AKS cluster for ScalarDL Auditor + +You must also create an AKS cluster for the ScalarDL Auditor deployment. For details, see [Guidelines for creating an AKS cluster for Scalar products](CreateAKSClusterForScalarProducts.mdx). + +## Step 4. Set up a database for ScalarDL Ledger + +You must prepare a database before deploying ScalarDL Ledger. Because ScalarDL Ledger uses ScalarDB internally to access databases, refer to [ScalarDB Supported Databases](https://scalardb.scalar-labs.com/docs/latest/requirements#databases) to see which types of databases ScalarDB supports. + +For details on setting up a database, see [Set up a database for ScalarDB/ScalarDL deployment in Azure](SetupDatabaseForAzure.mdx). + +## Step 5. Set up a database for ScalarDL Auditor + +You must also prepare a database before deploying ScalarDL Auditor. Because ScalarDL Auditor uses ScalarDB internally to access databases, refer to [ScalarDB Supported Databases](https://scalardb.scalar-labs.com/docs/latest/requirements#databases) to see which types of databases ScalarDB supports. + +For details on setting up a database, see [Set up a database for ScalarDB/ScalarDL deployment in Azure](SetupDatabaseForAzure.mdx). + +## Step 6. Create a bastion server for ScalarDL Ledger + +To execute some tools for deploying and managing ScalarDL Ledger on AKS, you must prepare a bastion server in the same VNet of the AKS cluster that you created in **Step 2**. For details, see [Create a Bastion Server](CreateBastionServer.mdx). + +## Step 7. 
Create a bastion server for ScalarDL Auditor
+
+To execute some tools for deploying and managing ScalarDL Auditor on AKS, you must prepare a bastion server in the same VNet of the AKS cluster that you created in **Step 3**. For details, see [Create a Bastion Server](CreateBastionServer.mdx).
+
+## Step 8. Create network peering between two AKS clusters
+
+To make ScalarDL work properly, ScalarDL Ledger and ScalarDL Auditor need to connect to each other. You must connect two VNets by using [virtual network peering](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview). For details, see [Configure Network Peering for ScalarDL Auditor Mode](NetworkPeeringForScalarDLAuditor.mdx).
+
+## Step 9. Prepare custom values files for the Scalar Helm Charts for both ScalarDL Ledger and ScalarDL Schema Loader
+
+To perform tasks, like accessing information in the database that you created in **Step 4**, you must configure custom values files for the Scalar Helm Charts for both ScalarDL Ledger and ScalarDL Schema Loader (for Ledger) based on your environment. For details, see [Configure a custom values file for Scalar Helm Charts](../helm-charts/configure-custom-values-file.mdx).
+
+## Step 10. Deploy ScalarDL Ledger by using the Scalar Helm Chart
+
+Deploy ScalarDL Ledger on your AKS cluster by using the Helm Chart for ScalarDL Ledger. For details, see [Deploy Scalar products using Scalar Helm Charts](../helm-charts/how-to-deploy-scalar-products.mdx).
+
+**Note:** We recommend creating a dedicated namespace by using the `kubectl create ns scalardl-ledger` command and deploying ScalarDL Ledger in the namespace by using the `-n scalardl-ledger` option with the `helm install` command.
+
+## Step 11. Prepare custom values files for the Scalar Helm Charts for both ScalarDL Auditor and ScalarDL Schema Loader
+
+To perform tasks, like accessing information in the database that you created in **Step 5**, you must also configure custom values files for the Scalar Helm Charts for both ScalarDL Auditor and ScalarDL Schema Loader (for Auditor) based on your environment. For details, see [Configure a custom values file for Scalar Helm Charts](../helm-charts/configure-custom-values-file.mdx).
+
+## Step 12. Deploy ScalarDL Auditor by using the Scalar Helm Chart
+
+Deploy ScalarDL Auditor on your AKS cluster by using the Helm Chart for ScalarDL Auditor. For details, see [Deploy Scalar products using Scalar Helm Charts](../helm-charts/how-to-deploy-scalar-products.mdx).
+
+**Note:** We recommend creating a dedicated namespace by using the `kubectl create ns scalardl-auditor` command and deploying ScalarDL Auditor in the namespace by using the `-n scalardl-auditor` option with the `helm install` command.
+
+## Step 13. Check the status of your ScalarDL Ledger deployment
+
+After deploying ScalarDL Ledger in your AKS cluster, you must check the status of each component. For details, see [Components to Regularly Check When Running in a Kubernetes Environment](RegularCheck.mdx).
+
+## Step 14. Check the status of your ScalarDL Auditor deployment
+
+After deploying ScalarDL Auditor in your AKS cluster, you must check the status of each component. For details, see [Components to Regularly Check When Running in a Kubernetes Environment](RegularCheck.mdx).
+
+## Step 15. Monitor your ScalarDL Ledger deployment
+
+After deploying ScalarDL Ledger in your AKS cluster, we recommend monitoring the deployed components and collecting their logs, especially in production.
For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +## Step 16. Monitor your ScalarDL Auditor deployment + +After deploying ScalarDL Auditor in your AKS cluster, we recommend monitoring the deployed components and collecting their logs, especially in production. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +## Remove ScalarDL Ledger and ScalarDL Auditor from AKS + +If you want to remove the environment that you created, please remove all the resources in reverse order from which you created them in. diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLAuditorOnEKS.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLAuditorOnEKS.mdx new file mode 100644 index 00000000..64c6101c --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLAuditorOnEKS.mdx @@ -0,0 +1,99 @@ +--- +displayed_sidebar: docsEnglish +--- + +# Deploy ScalarDL Ledger and ScalarDL Auditor on Amazon Elastic Kubernetes Service (EKS) + +This guide explains how to deploy ScalarDL Ledger and ScalarDL Auditor on Amazon Elastic Kubernetes Service (EKS). + +In this guide, you will create one of the following three environments in your AWS environment. To make Byzantine fault detection work properly, we recommend deploying ScalarDL Ledger and ScalarDL Auditor on different administrative domains (i.e., separate environments). + +* Use different AWS accounts (most recommended way) + + ![image](images/png/EKS_ScalarDL_Auditor_Multi_Account.drawio.png) + +* Use different Amazon Virtual Private Clouds (VPCs) (second recommended way) + + ![image](images/png/EKS_ScalarDL_Auditor_Multi_VPC.drawio.png) + +* Use different namespaces (third recommended way) + + ![image](images/png/EKS_ScalarDL_Auditor_Multi_Namespace.drawio.png) + +**Note:** This guide follows the second recommended way, "Use different VPCs." + +## Step 1. Subscribe to ScalarDL Ledger and ScalarDL Auditor in AWS Marketplace + +You must get the ScalarDL Ledger and ScalarDL Auditor container images from [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=bd4cd7de-49cd-433f-97ba-5cf71d76ec7b) and subscribe to ScalarDL Ledger and ScalarDL Auditor. For details on how to subscribe to ScalarDL Ledger and ScalarDL Auditor in AWS Marketplace, see [Subscribe to Scalar products from AWS Marketplace](AwsMarketplaceGuide.mdx#subscribe-to-scalar-products-from-aws-marketplace). + +## Step 2. Create an EKS cluster for ScalarDL Ledger + +You must create an EKS cluster for the ScalarDL Ledger deployment. For details, see [Guidelines for creating an Amazon EKS cluster for Scalar products](CreateEKSClusterForScalarProducts.mdx). + +## Step 3. Create an EKS cluster for ScalarDL Auditor + +You must also create an EKS cluster for the ScalarDL Auditor deployment. For details, see [Guidelines for creating an Amazon EKS cluster for Scalar products](CreateEKSClusterForScalarProducts.mdx). + +## Step 4. Set up a database for ScalarDL Ledger + +You must prepare a database before deploying ScalarDL Ledger. 
Because ScalarDL Ledger uses ScalarDB internally to access databases, refer to [ScalarDB Supported Databases](https://scalardb.scalar-labs.com/docs/latest/requirements#databases) to see which types of databases ScalarDB supports. + +For details on setting up a database, see [Set up a database for ScalarDB/ScalarDL deployment on AWS](SetupDatabaseForAWS.mdx). + +## Step 5. Set up a database for ScalarDL Auditor + +You must also prepare a database before deploying ScalarDL Auditor. Because ScalarDL Auditor uses ScalarDB internally to access databases, refer to [ScalarDB Supported Databases](https://scalardb.scalar-labs.com/docs/latest/requirements#databases) to see which types of databases ScalarDB supports. + +For details on setting up a database, see [Set up a database for ScalarDB/ScalarDL deployment on AWS](SetupDatabaseForAWS.mdx). + +## Step 6. Create a bastion server for ScalarDL Ledger + +To execute some tools for deploying and managing ScalarDL Ledger on EKS, you must prepare a bastion server in the same VPC as the EKS cluster that you created in **Step 2**. For details, see [Create a Bastion Server](CreateBastionServer.mdx). + +## Step 7. Create a bastion server for ScalarDL Auditor + +To execute some tools for deploying and managing ScalarDL Auditor on EKS, you must prepare a bastion server in the same VPC as the EKS cluster that you created in **Step 3**. For details, see [Create a Bastion Server](CreateBastionServer.mdx). + +## Step 8. Create network peering between two EKS clusters + +To make ScalarDL work properly, ScalarDL Ledger and ScalarDL Auditor need to connect to each other. You must connect two VPCs by using [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html). For details, see [Configure network peering for ScalarDL Auditor mode](NetworkPeeringForScalarDLAuditor.mdx). + +## Step 9. Prepare custom values files for the Scalar Helm Charts for both ScalarDL Ledger and ScalarDL Schema Loader + +To perform tasks, like accessing information in the database that you created in **Step 4**, you must configure custom values files for the Scalar Helm Charts for both ScalarDL Ledger and ScalarDL Schema Loader (for Ledger) based on your environment. For details, see [Configure a custom values file for Scalar Helm Charts](../helm-charts/configure-custom-values-file.mdx). + +## Step 10. Deploy ScalarDL Ledger by using the Scalar Helm Chart + +Deploy ScalarDL Ledger in your EKS cluster by using the Helm Chart for ScalarDL Ledger. For details, see [Deploy Scalar products using Scalar Helm Charts](../helm-charts/how-to-deploy-scalar-products.mdx). + +**Note:** We recommend creating a dedicated namespace by using the `kubectl create ns scalardl-ledger` command and deploying ScalarDL Ledger in the namespace by using the `-n scalardl-ledger` option with the `helm install` command. + +## Step 11. Prepare custom values files for the Scalar Helm Charts for both ScalarDL Auditor and ScalarDL Schema Loader + +To perform tasks, like accessing information in the database that you created in **Step 5**, you must configure custom values files for the Scalar Helm Charts for both ScalarDL Auditor and ScalarDL Schema Loader (for Auditor) based on your environment. For details, see [Configure a custom values file for Scalar Helm Charts](../helm-charts/configure-custom-values-file.mdx). + +## Step 12. Deploy ScalarDL Auditor by using the Scalar Helm Chart + +Deploy ScalarDL Auditor in your EKS cluster by using the Helm Chart for ScalarDL Auditor. 
For details, see [Deploy Scalar products using Scalar Helm Charts](../helm-charts/how-to-deploy-scalar-products.mdx). + +**Note:** We recommend creating a dedicated namespace by using the `kubectl create ns scalardl-auditor` command and deploying ScalarDL Auditor in the namespace by using the `-n scalardl-auditor` option with the `helm install` command. + +## Step 13. Check the status of your ScalarDL Ledger deployment + +After deploying ScalarDL Ledger in your EKS cluster, you must check the status of each component. For details, see [Components to Regularly Check When Running in a Kubernetes Environment](RegularCheck.mdx). + +## Step 14. Check the status of your ScalarDL Auditor deployment + +After deploying ScalarDL Auditor in your EKS cluster, you must check the status of each component. For details, see [Components to Regularly Check When Running in a Kubernetes Environment](RegularCheck.mdx). + +## Step 15. Monitor your ScalarDL Ledger deployment + +After deploying ScalarDL Ledger in your EKS cluster, we recommend monitoring the deployed components and collecting their logs, especially in production. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +## Step 16. Monitor your ScalarDL Auditor deployment + +After deploying ScalarDL Auditor in your EKS cluster, we recommend monitoring the deployed components and collecting their logs, especially in production. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +## Remove ScalarDL Ledger and ScalarDL Auditor from EKS + +If you want to remove the environment that you created, please remove all the resources in the reverse order in which you created them. diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLOnAKS.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLOnAKS.mdx new file mode 100644 index 00000000..554c14ea --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLOnAKS.mdx @@ -0,0 +1,51 @@ +--- +displayed_sidebar: docsEnglish +--- + +# Deploy ScalarDL Ledger on Azure Kubernetes Service (AKS) + +This document explains how to deploy **ScalarDL Ledger** on Azure Kubernetes Service (AKS). + +In this guide, you will create the following environment in your Azure environment. + +![image](images/png/AKS_ScalarDL_Ledger.drawio.png) + +## Step 1. Get the ScalarDL Ledger container image + +You must get the ScalarDL Ledger container image. For details, see [How to get the container images of Scalar products](HowToGetContainerImages.mdx). + +## Step 2. Create an AKS cluster + +You must create an AKS cluster for the ScalarDL Ledger deployment. For details, see [Guidelines for creating an AKS cluster for Scalar products](CreateAKSClusterForScalarProducts.mdx). + +## Step 3. Set up a database for ScalarDL Ledger + +You must prepare a database before deploying ScalarDL Ledger. Because ScalarDL Ledger uses ScalarDB internally to access databases, refer to [ScalarDB Supported Databases](https://scalardb.scalar-labs.com/docs/latest/requirements#databases) to see which types of databases ScalarDB supports. + +For details on setting up a database, see [Set up a database for ScalarDB/ScalarDL deployment in Azure](SetupDatabaseForAzure.mdx). + +## Step 4. 
Create a bastion server + +To execute some tools for deploying and managing ScalarDL Ledger on AKS, you must prepare a bastion server in the same Azure Virtual Network (VNet) as the AKS cluster that you created in **Step 2**. For details, see [Create a Bastion Server](CreateBastionServer.mdx). + +## Step 5. Prepare custom values files for the Scalar Helm Charts for both ScalarDL Ledger and ScalarDL Schema Loader + +To perform tasks, like accessing information in the database that you created in **Step 3**, you must configure custom values files for the Scalar Helm Charts for both ScalarDL Ledger and ScalarDL Schema Loader (for Ledger) based on your environment. For details, see [Configure a custom values file for Scalar Helm Charts](../helm-charts/configure-custom-values-file.mdx). + +## Step 6. Deploy ScalarDL Ledger by using the Scalar Helm Chart + +Deploy ScalarDL Ledger in your AKS cluster by using the Helm Chart for ScalarDL Ledger. For details, see [Deploy Scalar products using Scalar Helm Charts](../helm-charts/how-to-deploy-scalar-products.mdx). + +**Note:** We recommend creating a dedicated namespace by using the `kubectl create ns scalardl-ledger` command and deploying ScalarDL Ledger in the namespace by using the `-n scalardl-ledger` option with the `helm install` command. + +## Step 7. Check the status of your ScalarDL Ledger deployment + +After deploying ScalarDL Ledger in your AKS cluster, you must check the status of each component. For details, see [Components to Regularly Check When Running in a Kubernetes Environment](RegularCheck.mdx). + +## Step 8. Monitor your ScalarDL Ledger deployment + +After deploying ScalarDL Ledger in your AKS cluster, we recommend monitoring the deployed components and collecting their logs, especially in production. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +## Remove ScalarDL Ledger from AKS + +If you want to remove the environment that you created, please remove all the resources in the reverse order in which you created them. diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLOnEKS.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLOnEKS.mdx new file mode 100644 index 00000000..5b4ecadf --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ManualDeploymentGuideScalarDLOnEKS.mdx @@ -0,0 +1,51 @@ +--- +displayed_sidebar: docsEnglish +--- + +# Deploy ScalarDL Ledger on Amazon Elastic Kubernetes Service (EKS) + +This document explains how to deploy **ScalarDL Ledger** on Amazon Elastic Kubernetes Service (EKS). + +In this guide, you will create the following environment in your AWS environment. + +![image](images/png/EKS_ScalarDL_Ledger.drawio.png) + +## Step 1. Subscribe to ScalarDL Ledger in AWS Marketplace + +You must get the ScalarDL Ledger container image from [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-3jdwfmqonx7a2) and subscribe to ScalarDL Ledger. For details on how to subscribe to ScalarDL Ledger in AWS Marketplace, see [Subscribe to Scalar products from AWS Marketplace](AwsMarketplaceGuide.mdx#subscribe-to-scalar-products-from-aws-marketplace). + +## Step 2. Create an EKS cluster + +You must create an EKS cluster for the ScalarDL Ledger deployment. For details, see [Guidelines for creating an Amazon EKS cluster for Scalar products](CreateEKSClusterForScalarProducts.mdx). + +## Step 3. 
Set up a database for ScalarDL Ledger + +You must prepare a database before deploying ScalarDL Ledger. Because ScalarDL Ledger uses ScalarDB internally to access databases, refer to [ScalarDB Supported Databases](https://scalardb.scalar-labs.com/docs/latest/requirements#databases) to see which types of databases ScalarDB supports. + +For details on setting up a database, see [Set up a database for ScalarDB/ScalarDL deployment on AWS](SetupDatabaseForAWS.mdx). + +## Step 4. Create a bastion server + +To execute some tools for deploying and managing ScalarDL Ledger on EKS, you must prepare a bastion server in the same Amazon Virtual Private Cloud (VPC) as the EKS cluster that you created in **Step 2**. For details, see [Create a Bastion Server](CreateBastionServer.mdx). + +## Step 5. Prepare custom values files for the Scalar Helm Charts for both ScalarDL Ledger and ScalarDL Schema Loader + +To perform tasks, like accessing information in the database that you created in **Step 3**, you must configure custom values files for the Scalar Helm Charts for both ScalarDL Ledger and ScalarDL Schema Loader (for Ledger) based on your environment. For details, see [Configure a custom values file for Scalar Helm Charts](../helm-charts/configure-custom-values-file.mdx). + +## Step 6. Deploy ScalarDL Ledger by using the Scalar Helm Chart + +Deploy ScalarDL Ledger in your EKS cluster by using the Helm Chart for ScalarDL Ledger. For details, see [Deploy Scalar products using Scalar Helm Charts](../helm-charts/how-to-deploy-scalar-products.mdx). + +**Note:** We recommend creating a dedicated namespace by using the `kubectl create ns scalardl-ledger` command and deploying ScalarDL Ledger in the namespace by using the `-n scalardl-ledger` option with the `helm install` command. + +## Step 7. Check the status of your ScalarDL Ledger deployment + +After deploying ScalarDL Ledger in your EKS cluster, you must check the status of each component. For details, see [Components to Regularly Check When Running in a Kubernetes Environment](RegularCheck.mdx). + +## Step 8. Monitor your ScalarDL Ledger deployment + +After deploying ScalarDL Ledger in your EKS cluster, we recommend monitoring the deployed components and collecting their logs, especially in production. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +## Remove ScalarDL Ledger from EKS + +If you want to remove the environment that you created, please remove all the resources in the reverse order in which you created them. diff --git a/versioned_docs/version-3.9/scalar-kubernetes/NetworkPeeringForScalarDLAuditor.mdx b/versioned_docs/version-3.9/scalar-kubernetes/NetworkPeeringForScalarDLAuditor.mdx new file mode 100644 index 00000000..fc86fbe9 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/NetworkPeeringForScalarDLAuditor.mdx @@ -0,0 +1,55 @@ +--- +displayed_sidebar: docsEnglish +--- + +# Configure Network Peering for ScalarDL Auditor Mode + +This document explains how to connect the multiple private networks required for ScalarDL Auditor mode by using network peering. For ScalarDL Auditor mode to work properly, you must connect ScalarDL Ledger to ScalarDL Auditor. + +## What networks you must connect + +To make ScalarDL Auditor mode (Byzantine fault detection) work properly, you must connect three private networks. 
+ +* [ScalarDL Ledger network] ↔ [ScalarDL Auditor network] +* [ScalarDL Ledger network] ↔ [application (client) network] +* [ScalarDL Auditor network] ↔ [application (client) network] + +## Network requirements + +### IP address ranges + +To avoid conflicting IP addresses between the private networks, you must have private networks with different IP address ranges. For example: + +* **Private network for ScalarDL Ledger:** 10.1.0.0/16 +* **Private network for ScalarDL Auditor:** 10.2.0.0/16 +* **Private network for application (client):** 10.3.0.0/16 + +### Connections + +The default network ports for connecting ScalarDL Ledger, ScalarDL Auditor, and the application (client) are as follows. You must allow these connections between each private network. + +* **ScalarDL Ledger** + * **50051/TCP:** Accept requests from an application (client) and ScalarDL Auditor via Scalar Envoy. + * **50052/TCP:** Accept privileged requests from an application (client) and ScalarDL Auditor via Scalar Envoy. +* **ScalarDL Auditor** + * **40051/TCP:** Accept requests from an application (client) and ScalarDL Ledger via Scalar Envoy. + * **40052/TCP:** Accept privileged requests from an application (client) and ScalarDL Ledger via Scalar Envoy. +* **Scalar Envoy** (used with ScalarDL Ledger and ScalarDL Auditor) + * **50051/TCP:** Accept requests for ScalarDL Ledger from an application (client) and ScalarDL Auditor. + * **50052/TCP:** Accept privileged requests for ScalarDL Ledger from an application (client) and ScalarDL Auditor. + * **40051/TCP:** Accept requests for ScalarDL Auditor from an application (client) and ScalarDL Ledger. + * **40052/TCP:** Accept privileged requests for ScalarDL Auditor from an application (client) and ScalarDL Ledger. + +Note that if you change the listening port for ScalarDL in the configuration file (ledger.properties or auditor.properties) from the default, you must allow the connections by using the port that you configured. + +## Private-network peering + +For details on how to connect private networks in each cloud, see the official documentation for each cloud service. + +### Amazon VPC peering + +For details on how to peer virtual private clouds (VPCs) in an Amazon Web Services (AWS) environment, see the official documentation from Amazon at [Create a VPC peering connection](https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html). + +### Azure VNet peering + +For details on how to peer virtual networks in an Azure environment, see the official documentation from Microsoft at [Virtual network peering](https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview). diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarDBCluster.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarDBCluster.mdx new file mode 100644 index 00000000..2a51f739 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarDBCluster.mdx @@ -0,0 +1,153 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Production checklist for ScalarDB Cluster + +This checklist provides recommendations when deploying ScalarDB Cluster in a production environment. + +## Before you begin + +In this checklist, we assume that you are deploying ScalarDB Cluster on a managed Kubernetes cluster, which is recommended. 
+ +## Production checklist: ScalarDB Cluster + +The following is a checklist of recommendations when setting up ScalarDB Cluster in a production environment. + +### Number of pods and Kubernetes worker nodes + +To ensure that the Kubernetes cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardb-cluster-custom-values-indirect-mode.yaml) of `podAntiAffinity` for spreading three pods across the worker nodes. + +:::note + +If you place the worker nodes in different availability zones (AZs), you can withstand an AZ failure. + +::: + +### Worker node specifications + +From the perspective of commercial licenses, resources for one pod running ScalarDB Cluster are limited to 2vCPU / 4GB memory. In addition, some pods other than ScalarDB Cluster pods exist on the worker nodes. + +In other words, the following components could run on one worker node: + +* ScalarDB Cluster pod (2vCPU / 4GB) +* Envoy proxy (if you use `indirect` client mode or use a programming language other than Java) +* Your application pods (if you choose to run your application's pods on the same worker node) +* Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`) +* Kubernetes components + +:::note + +You do not need to deploy an Envoy pod when using `direct-kubernetes` mode. + +::: + +With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [Number of pods and Kubernetes worker nodes](ProductionChecklistForScalarDBCluster.mdx#number-of-pods-and-kubernetes-worker-nodes). + +However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum for a production environment. You should also consider the resources of the Kubernetes cluster (for example, the number of worker nodes, vCPUs per node, memory per node, ScalarDB Cluster pods, and pods for your application), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node to decide on the worker node resources. + +### Network + +You should create the Kubernetes cluster on a private network since ScalarDB Cluster does not provide any services to users directly via internet access. We recommend accessing ScalarDB Cluster via a private network from your applications. + +### Monitoring and logging + +You should monitor the deployed components and collect their logs. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +### Backup and restore + +You should enable the automatic backup feature and point-in-time recovery (PITR) feature in the backend database. For details, see [Set up a database for ScalarDB/ScalarDL deployment](SetupDatabase.mdx). + +## Production checklist: Client applications that access ScalarDB Cluster + +The following is a checklist of recommendations when setting up a client application that accesses ScalarDB Cluster in a production environment. 
+ +### Client mode (Java client library only) + +When using Java for your application, you can use an official Java client library. In this case, you can choose one of the two client modes: [`direct-kubernetes mode`](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api#direct-kubernetes-client-mode) or [`indirect mode`](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api#indirect-client-mode). + +From the perspective of performance, we recommend using `direct-kubernetes` mode. To use `direct-kubernetes` mode, you must deploy your application pods on the same Kubernetes cluster as ScalarDB Cluster pods. In this case, you don't need to deploy Envoy pods. + +If you can't deploy your Java application pods on the same Kubernetes cluster as ScalarDB Cluster pods for some reason, you must use `indirect` mode. In this case, you must deploy Envoy pods. + +:::note + +The client mode configuration is dedicated to the Java client library. If you use a programming language other than Java for your application (essentially, if you use the [gRPC API](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/scalardb-cluster-grpc-api-guide) or [gRPC SQL API](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/scalardb-cluster-sql-grpc-api-guide) directly from the programming language), no such configuration exists. In this case, you must deploy Envoy pods. + +::: + +### Transaction manager configuration (Java client library only) + +The client application must always access the database through ScalarDB Cluster. To ensure requests are running properly, check the properties file for your client application and confirm that `scalar.db.transaction_manager=cluster` is configured when using the CRUD API. + +#### Recommended for production environments + +```mermaid +flowchart LR + app["App
ScalarDB Cluster Library with gRPC"] + server["ScalarDB Cluster
ScalarDB Library with
Consensus Commit"] + db[(Underlying storage or database)] + app --> server --> db +``` + +#### Not recommended for production environments (for testing purposes only) + +```mermaid +flowchart LR + app["App
ScalarDB Cluster Library with
Consensus Commit"] + db[(Underlying storage or database)] + app --> db +``` + +### SQL connection configuration (Java client library only) + +The client application must always access the database through ScalarDB Cluster. To ensure requests are running properly, check the properties file for your client application and confirm that `scalar.db.sql.connection_mode=cluster` is configured when using the SQL API. + +#### Recommended for production environments + +```mermaid +flowchart LR + app["App
ScalarDB SQL Library (Cluster mode)"] + server["ScalarDB Cluster
ScalarDB Library with
Consensus Commit"] + db[(Underlying storage or database)] + app --> server --> db +``` + +#### Not recommended for production environments (for testing purposes only) + +```mermaid +flowchart LR + app["App
ScalarDB SQL Library (Direct mode)"] + db[(Underlying storage or database)] + app --> db +``` + +### Deployment of the client application when using `direct-kubernetes` client mode (Java client library only) + +If you use [`direct-kubernetes` client mode](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api#direct-kubernetes-client-mode), you must deploy your client application on the same Kubernetes cluster as the ScalarDB Cluster deployment. + +Also, when using `direct-kubernetes` client mode, you must deploy additional Kubernetes resources to make your client application work properly. For details, see [Deploy your client application on Kubernetes with `direct-kubernetes` mode](../helm-charts/how-to-deploy-scalardb-cluster.mdx#deploy-your-client-application-on-kubernetes-with-direct-kubernetes-mode). + +### Transaction handling (Java client library and gRPC API) + +You must make sure that your application always runs [`commit()`](https://scalardb.scalar-labs.com/docs/latest/api-guide#commit-a-transaction) or [`rollback()`](https://scalardb.scalar-labs.com/docs/latest/api-guide#roll-back-or-abort-a-transaction) after you [`begin()`](https://scalardb.scalar-labs.com/docs/latest/api-guide#begin-or-start-a-transaction) a transaction. If the application does not run `commit()` or `rollback()`, your application might experience unexpected issues or read inconsistent data from the backend database. + +:::note + +If you use the [gRPC API](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/scalardb-cluster-grpc-api-guide) or [SQL gRPC API](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/scalardb-cluster-sql-grpc-api-guide), your application should call a `Commit` or `Rollback` service after you call a `Begin` service to begin a transaction. + +::: + +### Exception handling (Java client library and gRPC API) + +You must make sure that your application handles transaction exceptions. For details, see the document for the API that you are using: + +* [Handle exceptions (Transactional API)](https://scalardb.scalar-labs.com/docs/latest/api-guide#handle-exceptions). +* [Handle exceptions (two-phase commit transactions API)](https://scalardb.scalar-labs.com/docs/latest/two-phase-commit-transactions#handle-exceptions) +* [Execute transactions (ScalarDB SQL API)](https://scalardb.scalar-labs.com/docs/latest/scalardb-sql/sql-api-guide#execute-transactions) +* [Handle SQLException (ScalarDB JDBC)](https://scalardb.scalar-labs.com/docs/latest/scalardb-sql/jdbc-guide#handle-sqlexception) +* [Error handling (ScalarDB Cluster gRPC API)](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/scalardb-cluster-grpc-api-guide#error-handling-1) +* [Error handling (ScalarDB Cluster SQL gRPC API)](https://scalardb.scalar-labs.com/docs/latest/scalardb-cluster/scalardb-cluster-sql-grpc-api-guide#error-handling-1) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarDLAuditor.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarDLAuditor.mdx new file mode 100644 index 00000000..8ca30ca6 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarDLAuditor.mdx @@ -0,0 +1,170 @@ +--- +displayed_sidebar: docsEnglish +--- + +# Production checklist for ScalarDL Auditor + +This checklist provides recommendations when deploying ScalarDL Auditor in a production environment. 
+ +## Before you begin + +In this checklist, we assume that you are deploying ScalarDL Auditor on a managed Kubernetes cluster, which is recommended. + +## Production checklist: ScalarDL Auditor + +The following is a checklist of recommendations when setting up ScalarDL Auditor in a production environment. + +### ScalarDL availability + +To ensure that the Kubernetes cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardl-audit-custom-values.yaml) of `podAntiAffinity` for spreading three pods across the worker nodes. + +:::note + +If you place the worker nodes in different availability zones (AZs), you can withstand an AZ failure. + +::: + +### Resources + +From the perspective of commercial licenses, resources for one pod running ScalarDL Auditor are limited to 2vCPU / 4GB memory. In addition to the ScalarDL Auditor pod, Kubernetes could deploy some of the following components to each worker node: + +* ScalarDL Auditor pod (2vCPU / 4GB) +* Envoy proxy +* Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`) +* Kubernetes components + +With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [ScalarDL availability](#scalardl-availability). + +However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum environment for production. You should also consider the resources of the Kubernetes cluster (for example, the number of worker nodes, vCPUs per node, memory per node, and ScalarDL Auditor pods), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node when deciding the worker node resources. + +### Network + +You should create the Kubernetes cluster on a private network since ScalarDL Auditor does not provide any services to users directly via internet access. We recommend accessing ScalarDL Auditor via a private network from your applications. + +### Monitoring and logging + +You should monitor the deployed components and collect their logs. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +### Backup and restore + +You should enable the automatic backup feature and point-in-time recovery (PITR) feature in the backend database. For details, see [Set up a database for ScalarDB/ScalarDL deployment](SetupDatabase.mdx). + +### ScalarDL Auditor deployment + +For Byzantine fault detection in ScalarDL to work properly, do not deploy ScalarDL Auditor pods on the same Kubernetes cluster as the ScalarDL Ledger deployment. Instead, you must deploy ScalarDL Auditor pods in an environment other than the administrative domain (other than the Kubernetes cluster) for the ScalarDL Ledger deployment. 
+ +#### Required for production environments + +```mermaid +graph LR + subgraph "ScalarDL" + subgraph "Administrative domain 1" + subgraph "Kubernetes cluster for Ledger" + B-1[ScalarDL Ledger] + end + end + subgraph "Administrative domain 2" + subgraph "Kubernetes cluster for Auditor" + C-1[ScalarDL Auditor] + end + end + end +``` + +#### Not recommended for production environments (for testing purposes only) + +```mermaid +graph LR + subgraph "Kubernetes cluster" + direction LR + A-1[ScalarDL Ledger] + A-2[ScalarDL Auditor] + end +``` + +### Connection between ScalarDL Ledger and ScalarDL Auditor + +For ScalarDL Auditor mode to work properly, you must allow the connection between ScalarDL Ledger and ScalarDL Auditor. + +```mermaid +graph LR + subgraph "Kubernetes cluster for Ledger" + A-1[ScalarDL Ledger] + end + subgraph "Kubernetes cluster for Auditor" + B-1[ScalarDL Auditor] + end + A-1 --- B-1 +``` + +ScalarDL uses the following ports for the connections between ScalarDL Ledger and ScalarDL Auditor. You must allow these connections between ScalarDL Ledger and ScalarDL Auditor: + +* ScalarDL Ledger + * 50051/TCP + * 50052/TCP +* ScalarDL Auditor + * 40051/TCP + * 40052/TCP + +### Private key and certificate + +When you use PKI for authentication, you must make sure that private keys and certificates that you register to ScalarDL Ledger and ScalarDL Auditor match the following requirements: + +```console +Algorithm : ECDSA +Hash function : SHA256 +Curve parameter : P-256 +``` + +For details, see [How to get a certificate](https://scalardl.scalar-labs.com/docs/latest/ca/caclient-getting-started). A key-generation sketch also appears at the end of this checklist. + +## Production checklist: Client applications that access ScalarDL Auditor + +The following is a checklist of recommendations when setting up a client application that accesses ScalarDL Auditor in a production environment. + +### Client application deployment + +For Byzantine fault detection in ScalarDL to work properly, do not deploy your application pods on the same Kubernetes cluster as the ScalarDL deployment. Instead, you must deploy your application in an environment other than the administrative domain (other than the Kubernetes cluster) for the ScalarDL deployment. + +#### Required for production environments + +```mermaid +graph LR + subgraph "Administrative domain 1" + subgraph "Another environment" + A-1[User application] + end + end + subgraph "ScalarDL" + subgraph "Administrative domain 2" + subgraph "Kubernetes cluster for Ledger" + B-1[ScalarDL Ledger] + end + end + subgraph "Administrative domain 3" + subgraph "Kubernetes cluster for Auditor" + C-1[ScalarDL Auditor] + end + end + end + A-1 --> B-1 + A-1 --> C-1 +``` + +#### Not recommended for production environments (for testing purposes only) + +```mermaid +graph LR + subgraph "Kubernetes cluster" + direction LR + A-1[User application] + A-2[ScalarDL Ledger] + A-3[ScalarDL Auditor] + end + A-1 --> A-2 + A-1 --> A-3 +``` + +### Client application checklist + +You must also make sure that you satisfy the [Production checklist: Client applications that access ScalarDL Ledger](ProductionChecklistForScalarDLLedger.mdx#production-checklist-client-applications-that-access-scalardl-ledger). 
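+ +As a supplement to the private key and certificate requirements above, the following is a minimal `openssl` sketch for generating an ECDSA private key on the P-256 (prime256v1) curve and a SHA256-signed certificate signing request (CSR). The file names are hypothetical examples; follow [How to get a certificate](https://scalardl.scalar-labs.com/docs/latest/ca/caclient-getting-started) for the actual procedure. + +```console +# Generate an ECDSA private key on the P-256 (prime256v1) curve. +openssl ecparam -name prime256v1 -genkey -noout -out auditor-key.pem + +# Create a CSR for the key, signed with SHA256. +openssl req -new -sha256 -key auditor-key.pem -out auditor.csr +```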
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarDLLedger.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarDLLedger.mdx new file mode 100644 index 00000000..c83e9cf8 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarDLLedger.mdx @@ -0,0 +1,156 @@ +--- +displayed_sidebar: docsEnglish +--- + +# Production checklist for ScalarDL Ledger + +This checklist provides recommendations when deploying ScalarDL Ledger in a production environment. + +## Before you begin + +In this checklist, we assume that you are deploying ScalarDL Ledger on a managed Kubernetes cluster, which is recommended. + +## Production checklist: ScalarDL Ledger + +The following is a checklist of recommendations when setting up ScalarDL Ledger in a production environment. + +### ScalarDL availability + +To ensure that the Kubernetes cluster has high availability, you should use at least three worker nodes and deploy at least three pods spread across the worker nodes. You can see the [sample configurations](https://github.com/scalar-labs/scalar-kubernetes/blob/master/conf/scalardl-custom-values.yaml) of `podAntiAffinity` for spreading three pods across the worker nodes. + +:::note + +If you place the worker nodes in different availability zones (AZs), you can withstand an AZ failure. + +::: + +### Resources + +From the perspective of commercial licenses, resources for one pod running ScalarDL Ledger are limited to 2vCPU / 4GB memory. In addition to the ScalarDL Ledger pod, Kubernetes could deploy some of the following components to each worker node: + +* ScalarDL Ledger pod (2vCPU / 4GB) +* Envoy proxy +* Monitoring components (if you deploy monitoring components such as `kube-prometheus-stack`) +* Kubernetes components + +With this in mind, you should use a worker node that has at least 4vCPU / 8GB memory resources and use at least three worker nodes for availability, as mentioned in [ScalarDL availability](#scalardl-availability). + +However, three nodes with at least 4vCPU / 8GB memory resources per node is the minimum environment for production. You should also consider the resources of the Kubernetes cluster (for example, the number of worker nodes, vCPUs per node, memory per node, and ScalarDL Ledger pods), which depend on your system's workload. In addition, if you plan to scale the pods automatically by using some features like [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), you should consider the maximum number of pods on the worker node when deciding the worker node resources. + +### Network + +You should create the Kubernetes cluster on a private network since ScalarDL Ledger does not provide any services to users directly via internet access. We recommend accessing ScalarDL Ledger via a private network from your applications. + +### Monitoring and logging + +You should monitor the deployed components and collect their logs. For details, see [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx) and [Collecting logs from Scalar products on a Kubernetes cluster](K8sLogCollectionGuide.mdx). + +### Backup and restore + +You should enable the automatic backup feature and point-in-time recovery (PITR) feature in the backend database. For details, see [Set up a database for ScalarDB/ScalarDL deployment](SetupDatabase.mdx). 
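+ +For example, if the backend database is Amazon DynamoDB and PITR is not already enabled on a table, the following is a minimal AWS CLI sketch for enabling it; the table name is a hypothetical example. (If you use ScalarDB Schema Loader to create schemas, it enables PITR for tables by default.) + +```console +# Enable point-in-time recovery (PITR) on one DynamoDB table. +aws dynamodb update-continuous-backups --table-name scalar_assets --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true +```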
+ +## Production checklist: Client applications that access ScalarDL Ledger + +The following is a checklist of recommendations when setting up a client application that accesses ScalarDL Ledger in a production environment. + +### Client application deployment + +For Byzantine fault detection in ScalarDL to work properly, do not deploy your application pods on the same Kubernetes cluster as the ScalarDL Ledger deployment. Instead, you must deploy your application in an environment other than the administrative domain (other than the Kubernetes cluster) for the ScalarDL Ledger deployment. + +#### Required for production environments + +```mermaid +graph LR + subgraph "Administrative domain 1" + subgraph "Another environment" + A-1[User application] + end + end + subgraph "Administrative domain 2" + subgraph "Kubernetes cluster" + B-1[ScalarDL Ledger] + end + end + A-1 --> B-1 +``` + +#### Not recommended for production environments (for testing purposes only) + +```mermaid +graph LR + subgraph "Kubernetes cluster" + direction LR + A-1[User application] --> A-2[ScalarDL Ledger] + end +``` + +### Contract and function + +To check if your contracts and functions follow the guidelines, see the following: + +* [A Guide on How to Write a Good Contract for ScalarDL](https://scalardl.scalar-labs.com/docs/latest/how-to-write-contract) +* [A Guide on How to Write Function for ScalarDL](https://scalardl.scalar-labs.com/docs/latest/how-to-write-function) + +### Contract versioning + +After you register a contract, you cannot overwrite that existing contract. Therefore, you should consider the versioning of contracts. We recommend one of the following two methods. + +#### Versioning by using `Class Name` + +```console +Contract ID : FooV1 +Binary Name : com.example.contract.FooV1 +Class file (Class Name) : src/main/java/com/example/contract/FooV1.class +--- +Contract ID : FooV2 +Binary Name : com.example.contract.FooV2 +Class file (Class Name) : src/main/java/com/example/contract/FooV2.class +``` + +#### Versioning by using `Package Name` + +```console +Contract ID : FooV3 +Binary Name : com.example.contract.v3.Foo +Class file (Class Name) : src/main/java/com/example/contract/v3/Foo.class +--- +Contract ID : FooV4 +Binary Name : com.example.contract.v4.Foo +Class file (Class Name) : src/main/java/com/example/contract/v4/Foo.class +``` + +### Contract limitations + +If the binary name does not match the package name and class name when you register the contract, you cannot execute that contract after registering it. + +#### Binary name and class name are different (you cannot execute this contract) + +```console +Contract ID : FooV5 +Binary Name : com.example.contract.FooV5 +Class file (Class Name) : src/main/java/com/example/contract/FooV6.class +``` + +#### Binary name and package name are different (you cannot execute this contract) + +```console +Contract ID : FooV7 +Binary Name : com.example.contract.v7.Foo +Class file (Class Name) : src/main/java/com/example/contract/v8/Foo.class +``` + +### Private key and certificate + +When you use PKI for authentication, you must make sure that private keys and certificates that you register to ScalarDL Ledger match the following requirements: + +```console +Algorithm : ECDSA +Hash function : SHA256 +Curve parameter : P-256 +``` + +For details, see [How to get a certificate](https://scalardl.scalar-labs.com/docs/latest/ca/caclient-getting-started). + +### Exception handling + +You must make sure that your application handles exceptions. 
For details, see [A Guide on How to Handle Errors in ScalarDL](https://scalardl.scalar-labs.com/docs/latest/how-to-handle-errors). diff --git a/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarProducts.mdx b/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarProducts.mdx new file mode 100644 index 00000000..0779599a --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/ProductionChecklistForScalarProducts.mdx @@ -0,0 +1,14 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Production checklist for Scalar products + +To make your deployment ready for production, refer to the following: + +* [Production checklist for ScalarDB Cluster](ProductionChecklistForScalarDBCluster.mdx) +* [Production checklist for ScalarDL Ledger](ProductionChecklistForScalarDLLedger.mdx) +* [Production checklist for ScalarDL Auditor](ProductionChecklistForScalarDLAuditor.mdx) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/RegularCheck.mdx b/versioned_docs/version-3.9/scalar-kubernetes/RegularCheck.mdx new file mode 100644 index 00000000..497546ec --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/RegularCheck.mdx @@ -0,0 +1,95 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Components to Regularly Check When Running in a Kubernetes Environment + +Most of the components deployed by the manual deployment guides are self-healing with the help of managed Kubernetes services and the Kubernetes self-healing capability. Alerts are also configured to fire when unexpected behavior occurs. As a result, there shouldn't be much to do day to day for a deployment of Scalar products on a managed Kubernetes cluster. However, we recommend checking the status of the system on a regular basis to see if everything is working fine. The following is a list of things you might want to do on a regular basis. 
+ +## Kubernetes resources + +### Check if all pods are healthy + +Please check the Kubernetes namespaces: + +* `default` (or the namespace specified when you deployed Scalar products) for the Scalar product deployment +* `monitoring` for the Prometheus Operator and Loki + +What to check: + +* The `STATUS` of all pods is `Running` +* Pods are evenly distributed across the different nodes + +```console +kubectl get pod -o wide -n <namespace> +``` + +You should see the following output: + +```console +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +scalardb-7876f595bd-2jb28 1/1 Running 0 2m35s 10.244.2.6 k8s-worker2 <none> <none> +scalardb-7876f595bd-rfvk6 1/1 Running 0 2m35s 10.244.1.8 k8s-worker <none> <none> +scalardb-7876f595bd-xfkv4 1/1 Running 0 2m35s 10.244.3.8 k8s-worker3 <none> <none> +scalardb-envoy-84c475f77b-cflkn 1/1 Running 0 2m35s 10.244.1.7 k8s-worker <none> <none> +scalardb-envoy-84c475f77b-tzmc9 1/1 Running 0 2m35s 10.244.3.7 k8s-worker3 <none> <none> +scalardb-envoy-84c475f77b-vztqr 1/1 Running 0 2m35s 10.244.2.5 k8s-worker2 <none> <none> +``` + +```console +kubectl get pod -n monitoring -o wide +``` + +You should see the following output: + +```console +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +alertmanager-scalar-monitoring-kube-pro-alertmanager-0 2/2 Running 1 (11m ago) 12m 10.244.2.4 k8s-worker2 <none> <none> +prometheus-scalar-monitoring-kube-pro-prometheus-0 2/2 Running 0 12m 10.244.1.5 k8s-worker <none> <none> +scalar-logging-loki-0 1/1 Running 0 13m 10.244.2.2 k8s-worker2 <none> <none> +scalar-logging-loki-promtail-2c4k9 0/1 Running 0 13m 10.244.0.5 k8s-control-plane <none> <none> +scalar-logging-loki-promtail-8r48b 1/1 Running 0 13m 10.244.3.2 k8s-worker3 <none> <none> +scalar-logging-loki-promtail-b26c6 1/1 Running 0 13m 10.244.2.3 k8s-worker2 <none> <none> +scalar-logging-loki-promtail-sks56 1/1 Running 0 13m 10.244.1.2 k8s-worker <none> <none> +scalar-monitoring-grafana-77c4dbdd85-4mrn7 3/3 Running 0 12m 10.244.3.4 k8s-worker3 <none> <none> +scalar-monitoring-kube-pro-operator-7575dd8bbd-bxhrc 1/1 Running 0 12m 10.244.1.3 k8s-worker <none> <none> +``` + +### Check if all nodes are healthy + +What to check: + +* The `STATUS` of all nodes is `Ready` + +```console +kubectl get nodes +``` + +You should see the following output: + +```console +NAME STATUS ROLES AGE VERSION +k8s-control-plane Ready control-plane 16m v1.25.3 +k8s-worker Ready <none> 15m v1.25.3 +k8s-worker2 Ready <none> 15m v1.25.3 +k8s-worker3 Ready <none> 15m v1.25.3 +``` + +## Prometheus dashboard (Alerts of Scalar products) + +Access the Prometheus dashboard by following [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx). In the **Alerts** tab, you can see the alert status. + +What to check: + +* All alerts are **green (Inactive)** + +If an issue is occurring, the status is shown as **red (Firing)**. + +## Grafana dashboard (metrics of Scalar products) + +Access the Grafana dashboard by following [Monitoring Scalar products on a Kubernetes cluster](K8sMonitorGuide.mdx). In the **Dashboards** tab, you can see the dashboards of Scalar products. In these dashboards, you can see some metrics of Scalar products. + +These dashboards cannot resolve issues directly, but you can see changes from normal behavior (for example, an increase in transaction errors) to get hints for investigating issues. 
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/RestoreDatabase.mdx b/versioned_docs/version-3.9/scalar-kubernetes/RestoreDatabase.mdx new file mode 100644 index 00000000..61a615e3 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/RestoreDatabase.mdx @@ -0,0 +1,160 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Restore databases in a Kubernetes environment + +This guide explains how to restore databases that ScalarDB or ScalarDL uses in a Kubernetes environment. Please note that this guide assumes that you are using a managed database from a cloud services provider as the backend database for ScalarDB or ScalarDL. + +## Procedure to restore databases + +1. Scale in ScalarDB or ScalarDL pods to **0** to stop requests to the backend databases. You can scale in the pods to **0** by using the `--set *.replicaCount=0` flag in the helm command. + * ScalarDB Server + ```console + helm upgrade <release name> scalar-labs/scalardb -n <namespace> -f /path/to/<your custom values file> --set scalardb.replicaCount=0 + ``` + * ScalarDL Ledger + ```console + helm upgrade <release name> scalar-labs/scalardl -n <namespace> -f /path/to/<your custom values file> --set ledger.replicaCount=0 + ``` + * ScalarDL Auditor + ```console + helm upgrade <release name> scalar-labs/scalardl-audit -n <namespace> -f /path/to/<your custom values file> --set auditor.replicaCount=0 + ``` +2. Restore the databases by using the point-in-time recovery (PITR) feature. + + For details on how to restore the databases based on your managed database, please refer to the [Supplemental procedures to restore databases based on managed database](RestoreDatabase.mdx#supplemental-procedures-to-restore-databases-based-on-managed-database) section in this guide. + + If you are using NoSQL or multiple databases, you should specify the middle point of the pause duration period that you created when following the backup procedure in [Back up a NoSQL database in a Kubernetes environment](BackupNoSQL.mdx). +3. Update **database.properties**, **ledger.properties**, or **auditor.properties** based on the newly restored database. + + Because the PITR feature restores databases as another instance, you must update the endpoint information in the custom values file of ScalarDB or ScalarDL to access the newly restored databases. For details on how to configure the custom values file, see [Configure a custom values file for Scalar Helm Charts](../helm-charts/configure-custom-values-file.mdx). + + Please note that, if you are using Amazon DynamoDB, your data will be restored with another table name instead of another instance. In other words, the endpoint will not change after restoring the data. Instead, you will need to restore the data by renaming the tables in Amazon DynamoDB. For details on how to restore data with the same table name, please see the [Amazon DynamoDB](RestoreDatabase.mdx#amazon-dynamodb) section in this guide. +4. Scale out the ScalarDB or ScalarDL pods to **1** or more to start accepting requests from clients by using the `--set *.replicaCount=N` flag in the helm command. 
* ScalarDB Server + ```console + helm upgrade <release name> scalar-labs/scalardb -n <namespace> -f /path/to/<your custom values file> --set scalardb.replicaCount=3 + ``` + * ScalarDL Ledger + ```console + helm upgrade <release name> scalar-labs/scalardl -n <namespace> -f /path/to/<your custom values file> --set ledger.replicaCount=3 + ``` + * ScalarDL Auditor + ```console + helm upgrade <release name> scalar-labs/scalardl-audit -n <namespace> -f /path/to/<your custom values file> --set auditor.replicaCount=3 + ``` + +## Supplemental procedures to restore databases based on managed database + +### Amazon DynamoDB + +When using the PITR feature, Amazon DynamoDB restores data with another table name. Therefore, you must follow additional steps to restore data with the same table name. + +#### Steps + +1. Create a backup. + 1. Select the middle point of the pause duration period as the restore point. + 2. Use PITR to restore table A to table B. + 3. Perform a backup of the restored table B. Then, confirm that the backup is named appropriately as backup B. + 4. Remove table B. + + For details on how to restore DynamoDB tables by using PITR and how to perform a backup of DynamoDB tables manually, see the following official documentation from Amazon: + + * [Restoring a DynamoDB table to a point in time](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.Tutorial.html) + * [Backing up a DynamoDB table](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Backup.Tutorial.html) + + You can do this **Create a backup** step as a part of backup operations in the [Back up a NoSQL database in a Kubernetes environment](BackupNoSQL.mdx#create-a-period-to-restore-data-and-perform-a-backup). + +2. Restore from the backup. + 1. Remove table A. + 2. Create a table named A by using backup B. + +3. Update the table configuration if necessary, depending on your environment. + + Some configurations, like autoscaling policies, are not set after restoring, so you may need to manually set those configurations depending on your needs. For details, see the official documentation from Amazon at [Backing up and restoring DynamoDB tables with DynamoDB: How it works](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CreateBackup.html). + + For example, if you are using ScalarDB Schema Loader or ScalarDL Schema Loader to create tables, autoscaling is enabled by default. Therefore, you will need to manually enable autoscaling for the restored tables in DynamoDB. For details on how to enable autoscaling in DynamoDB, see the official documentation from Amazon at [Enabling DynamoDB auto scaling on existing tables](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.Console.html#AutoScaling.Console.ExistingTable). + + In addition, after restoring the databases, the PITR feature will be disabled, and the read/write capacity mode will be reset to the default value. If necessary, depending on your environment, you will need to manually set these configurations. For some configurations for restored tables, see [Set up a database for ScalarDB/ScalarDL deployment on AWS (Amazon DynamoDB)](SetupDatabaseForAWS.mdx#amazon-dynamodb). + +### Azure Cosmos DB for NoSQL + +When using the PITR feature, Azure Cosmos DB restores data by using another account. Therefore, you must update the endpoint configuration in the custom values file. + +#### Steps + +1. Restore the account. For details on how to restore an Azure Cosmos DB account by using PITR, see [Restore an Azure Cosmos DB account that uses continuous backup mode](https://learn.microsoft.com/en-us/azure/cosmos-db/restore-account-continuous-backup). + +2. 
Change the **default consistency level** for the restored account to **Strong**. For details on how to change this value, see the official documentation from Microsoft at [Configure the default consistency level](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-consistency#configure-the-default-consistency-level). + +3. Update **database.properties** for ScalarDB Schema Loader or ScalarDL Schema Loader based on the newly restored account. + + ScalarDB implements the Cosmos DB adapter by using its stored procedures, which are installed when creating schemas by using ScalarDB Schema Loader or ScalarDL Schema Loader. However, the PITR feature in Cosmos DB does not restore stored procedures, so you will need to reinstall the required stored procedures for all tables after restoration. You can reinstall the required stored procedures by using the `--repair-all` option in ScalarDB Schema Loader or ScalarDL Schema Loader. + * **ScalarDB tables:** For details on how to configure **database.properties** for ScalarDB Schema Loader, see [Configure ScalarDB for Cosmos DB for NoSQL](https://scalardb.scalar-labs.com/docs/latest/getting-started-with-scalardb#set-up-your-database-for-scalardb). + + * **ScalarDL tables:** For details on how to configure the custom values file for ScalarDL Schema Loader, see [Configure a custom values file for ScalarDL Schema Loader](../helm-charts/configure-custom-values-scalardl-schema-loader.mdx). + +4. Re-create the stored procedures by using the `--repair-all` flag in ScalarDB Schema Loader or ScalarDL Schema Loader as follows: + + * ScalarDB tables + ```console + java -jar scalardb-schema-loader-<version>.jar --config /path/to/<your database.properties> -f /path/to/<your schema file> [--coordinator] --repair-all + ``` + * ScalarDL Ledger tables + ```console + helm install repair-schema-ledger scalar-labs/schema-loading -n <namespace> -f /path/to/<your custom values file> --set "schemaLoading.commandArgs={--repair-all}" + ``` + * ScalarDL Auditor tables + ```console + helm install repair-schema-auditor scalar-labs/schema-loading -n <namespace> -f /path/to/<your custom values file> --set "schemaLoading.commandArgs={--repair-all}" + ``` + + For more details on repairing tables in ScalarDB Schema Loader, see [Repair tables](https://scalardb.scalar-labs.com/docs/latest/schema-loader#repair-tables). + +5. Update the table configuration if necessary, depending on your environment. For some configurations for restored accounts, see [Set up a database for ScalarDB/ScalarDL deployment on Azure (Azure Cosmos DB for NoSQL)](SetupDatabaseForAzure.mdx#azure-cosmos-db-for-nosql). + +### Amazon RDS + +When using the PITR feature, Amazon RDS restores data by using another database instance. Therefore, you must update the endpoint configuration in the custom values file. + +#### Steps + +1. Restore the database instance. For details on how to restore the Amazon RDS instance by using PITR, see the following official documentation from Amazon: + * [Restoring a DB instance to a specified time](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html) + * [Restoring a Multi-AZ DB cluster to a specified time](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.MultiAZDBCluster.html) + +2. Update the table configuration if necessary, depending on your environment. For some configurations for the restored database instance, see [Set up a database for ScalarDB/ScalarDL deployment on AWS (Amazon RDS for MySQL, PostgreSQL, Oracle, and SQL Server)](SetupDatabaseForAWS.mdx#amazon-rds-for-mysql-postgresql-oracle-and-sql-server). 
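+ +For reference, the following is a minimal AWS CLI sketch of such a PITR restore; the instance identifiers and timestamp are hypothetical examples. + +```console +# Restore the source DB instance to a new DB instance at a specific point in time. +aws rds restore-db-instance-to-point-in-time --source-db-instance-identifier scalardb-source --target-db-instance-identifier scalardb-restored --restore-time 2023-02-09T10:30:00Z +``` + +Because the restored instance (`scalardb-restored` in this sketch) has a new endpoint, remember to update the custom values file as described above.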
+ +### Amazon Aurora + +When using the PITR feature, Amazon Aurora restores data by using another database cluster. Therefore, you must update the endpoint configuration in the custom values file. + +#### Steps + +1. Restore the database cluster. For details on how to restore an Amazon Aurora cluster by using PITR, see the official documentation from Amazon at [Restoring a DB cluster to a specified time](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-pitr.html). + +2. Add a replica (reader) to make the database cluster a Multi-AZ cluster if necessary, depending on your environment. + + The PITR feature in Amazon Aurora cannot restore a database cluster by using a Multi-AZ configuration. If you want to restore the database cluster as a Multi-AZ cluster, you must add a reader after restoring the database cluster. For details on how to add a reader, see the official documentation from Amazon at [Adding Aurora Replicas to a DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-replicas-adding.html). + +3. Update the table configuration if necessary, depending on your environment. For some configurations for the restored database cluster, see [Set up a database for ScalarDB/ScalarDL deployment on AWS (Amazon Aurora MySQL and Amazon Aurora PostgreSQL)](SetupDatabaseForAWS.mdx#amazon-aurora-mysql-and-amazon-aurora-postgresql). + +### Azure Database for MySQL/PostgreSQL + +When using the PITR feature, Azure Database for MySQL/PostgreSQL restores data by using another server. Therefore, you must update the endpoint configuration in the custom values file. + +#### Steps + +1. Restore the database server. For details on how to restore an Azure Database for MySQL/PostgreSQL server by using PITR, see the following: + + * [Point-in-time restore of an Azure Database for MySQL Flexible Server using Azure portal](https://learn.microsoft.com/en-us/azure/mysql/flexible-server/how-to-restore-server-portal) + * [Backup and restore in Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-backup-restore) + +2. Update the table configuration if necessary, depending on your environment. For some configurations for the restored database server, see the following: + + * [Set up a database for ScalarDB/ScalarDL deployment on Azure (Azure Database for MySQL)](SetupDatabaseForAzure.mdx#azure-database-for-mysql) + * [Set up a database for ScalarDB/ScalarDL deployment on Azure (Azure Database for PostgreSQL)](SetupDatabaseForAzure.mdx#azure-database-for-postgresql) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/SetupDatabase.mdx b/versioned_docs/version-3.9/scalar-kubernetes/SetupDatabase.mdx new file mode 100644 index 00000000..3ffe0967 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/SetupDatabase.mdx @@ -0,0 +1,13 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Set up a database for ScalarDB/ScalarDL deployment + +This guide explains how to set up a database for ScalarDB/ScalarDL deployment on cloud services. 
+ +* [Set up a database for ScalarDB/ScalarDL deployment on AWS](SetupDatabaseForAWS.mdx) +* [Set up a database for ScalarDB/ScalarDL deployment on Azure](SetupDatabaseForAzure.mdx) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/SetupDatabaseForAWS.mdx b/versioned_docs/version-3.9/scalar-kubernetes/SetupDatabaseForAWS.mdx new file mode 100644 index 00000000..78c817fd --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/SetupDatabaseForAWS.mdx @@ -0,0 +1,182 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Set up a database for ScalarDB/ScalarDL deployment on AWS + +This guide explains how to set up a database for ScalarDB/ScalarDL deployment on AWS. + +## Amazon DynamoDB + +### Authentication method + +When you use DynamoDB, you must set `REGION`, `ACCESS_KEY_ID`, and `SECRET_ACCESS_KEY` in the ScalarDB/ScalarDL properties file as follows. + +```properties +scalar.db.contact_points= +scalar.db.username= +scalar.db.password= +scalar.db.storage=dynamo +``` + +Please refer to the following document for more details on the properties for DynamoDB. + +* [Configure ScalarDB for DynamoDB](https://scalardb.scalar-labs.com/docs/latest/getting-started-with-scalardb#configure-scalardb-2) + +### Required configuration/steps + +DynamoDB is available for use in AWS by default. You do not need to set up anything manually to use it. + +### Optional configurations/steps + +#### Enable point-in-time recovery (Recommended in the production environment) + +You can enable PITR as a backup/restore method for DynamoDB. If you use [ScalarDB Schema Loader](https://scalardb.scalar-labs.com/docs/latest/schema-loader) for creating schemas, it enables the PITR feature for tables by default. Please refer to the official document for more details. + +* [Point-in-time recovery for DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html) + +It is recommended since the point-in-time recovery feature takes backups automatically and continuously, so you can reduce downtime (pause duration) for backup operations. Please refer to the following document for more details on how to back up/restore Scalar product data. + +* [Backup restore guide for Scalar products](BackupRestoreGuide.mdx) + +#### Configure monitoring (Recommended in the production environment) + +You can configure the monitoring and logging of DynamoDB using its native feature. Please refer to the official document for more details. + +* [Monitoring and logging](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/monitoring.html) + +It is recommended since metrics and logs help you investigate issues in the production environment when they happen. + +#### Use VPC endpoint (Recommended in the production environment) + +You can access DynamoDB through a VPC endpoint so that requests do not traverse the public internet. Note that this feature has not yet been tested with Scalar products. Please refer to the official document for more details. + +* [Using Amazon VPC endpoints to access DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html) + +It is recommended since private internal connections that do not traverse the WAN make a system more secure. + +#### Configure Read/Write Capacity (Optional based on your environment) + +You can configure the **Read/Write Capacity** of DynamoDB tables based on your requirements. Please refer to the official document for more details on Read/Write Capacity.
+ +* [Read/write capacity mode](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html) + +You can configure Read/Write Capacity using ScalarDB/DL Schema Loader when you create a table. Please refer to the following document for more details on how to configure Read/Write Capacity (RU) using ScalarDB/DL Schema Loader. + +* [ScalarDB Schema Loader](https://scalardb.scalar-labs.com/docs/latest/schema-loader) + +## Amazon RDS for MySQL, PostgreSQL, Oracle, and SQL Server + +### Authentication method + +When you use RDS, you must set `JDBC_URL`, `USERNAME`, and `PASSWORD` in the ScalarDB/ScalarDL properties file as follows. + +```properties +scalar.db.contact_points= +scalar.db.username= +scalar.db.password= +scalar.db.storage=jdbc +``` + +Please refer to the following document for more details on the properties for RDS (JDBC databases). + +* [Configure ScalarDB for JDBC databases](https://scalardb.scalar-labs.com/docs/latest/getting-started-with-scalardb#set-up-your-database-for-scalardb) + +### Required configuration/steps + +#### Create an RDS database instance + +You must create an RDS database instance. Please refer to the official document for more details. + +* [Configuring an Amazon RDS DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_RDS_Configuring.html) + +### Optional configurations/steps + +#### Enable automated backups (Recommended in the production environment) + +You can enable automated backups. Please refer to the official document for more details. + +* [Working with backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html) + +It is recommended since the automated backups feature enables point-in-time recovery, which can restore data to a specific point in time and can reduce downtime (pause duration) for backup operations when you use multiple databases under Scalar products. Please refer to the following document for more details on how to back up/restore the Scalar product data. + +* [Backup restore guide for Scalar products](BackupRestoreGuide.mdx) + +#### Configure monitoring (Recommended in the production environment) + +You can configure the monitoring and logging of RDS using its native feature. Please refer to the official documents for more details. + +* [Monitoring metrics in an Amazon RDS instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html) +* [Monitoring events, logs, and streams in an Amazon RDS DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitor_Logs_Events.html) + +It is recommended since metrics and logs help you investigate issues in the production environment when they happen. + +#### Disable public access (Recommended in the production environment) + +Public access is disabled by default. You can access the RDS database instance from the Scalar product pods on your EKS cluster as follows. + +* Create the RDS database instance on the same VPC as your EKS cluster. +* Connect the VPC for RDS and the VPC for the EKS cluster that hosts the Scalar product deployment by using [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html). (Note that this setup has not yet been tested with Scalar products; see the sketch after this section.) + +It is recommended since private internal connections that do not traverse the WAN make a system more secure.
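The VPC peering setup itself can be scripted with the AWS CLI. The following is a minimal sketch with hypothetical VPC IDs; after the peering connection becomes active, you still need to update the route tables and security groups on both sides so that the Scalar product pods can reach the database endpoint.

```console
# Request a peering connection between the EKS VPC and the database VPC (IDs are hypothetical).
aws ec2 create-vpc-peering-connection --vpc-id vpc-0a1b2c3d4e5f67890 --peer-vpc-id vpc-0f9e8d7c6b5a43210

# Accept the request from the account and region that own the peer VPC (ID is hypothetical).
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0123456789abcdef0
```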
+ +## Amazon Aurora MySQL and Amazon Aurora PostgreSQL + +### Authentication method + +When you use Amazon Aurora, you must set `JDBC_URL`, `USERNAME`, and `PASSWORD` in the ScalarDB/ScalarDL properties file as follows. (A concrete, hypothetical example appears at the end of this section.) + +```properties +scalar.db.contact_points= +scalar.db.username= +scalar.db.password= +scalar.db.storage=jdbc +``` + +Please refer to the following document for more details on the properties for Amazon Aurora (JDBC databases). + +* [Configure ScalarDB for JDBC databases](https://scalardb.scalar-labs.com/docs/latest/getting-started-with-scalardb#set-up-your-database-for-scalardb) + +### Required configuration/steps + +#### Create an Amazon Aurora DB cluster + +You must create an Amazon Aurora DB cluster. Please refer to the official document for more details. + +* [Configuring your Amazon Aurora DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraSettingUp.html) + +### Optional configurations/steps + +#### Configure backup configurations (Optional based on your environment) + +Amazon Aurora takes backups automatically by default. You do not need to enable the backup feature manually. + +If you want to change backup settings, such as the backup retention period and backup window, you can do so. Please refer to the official document for more details. + +* [Backing up and restoring an Amazon Aurora DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/BackupRestoreAurora.html) + +Please refer to the following document for more details on how to back up/restore the Scalar product data. + +* [Backup restore guide for Scalar products](BackupRestoreGuide.mdx) + +#### Configure monitoring (Recommended in the production environment) + +You can configure the monitoring and logging of Amazon Aurora using its native feature. Please refer to the official documents for more details. + +* [Monitoring metrics in an Amazon Aurora cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/MonitoringAurora.html) +* [Monitoring events, logs, and streams in an Amazon Aurora DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_Monitor_Logs_Events.html) + +It is recommended since metrics and logs help you investigate issues in the production environment when they happen. + +#### Disable public access (Recommended in the production environment) + +Public access is disabled by default. You can access the Amazon Aurora DB cluster from the Scalar product pods on your EKS cluster as follows. + +* Create the Amazon Aurora DB cluster on the same VPC as your EKS cluster. +* Connect the VPC for the Amazon Aurora DB cluster and the VPC for the EKS cluster that hosts the Scalar product deployment by using [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html). (Note that this setup has not yet been tested with Scalar products.) + +It is recommended since private internal connections that do not traverse the WAN make a system more secure.
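As a concrete illustration of the authentication configuration above, a properties file for an Aurora MySQL cluster would typically use the cluster (writer) endpoint as the JDBC contact point. The endpoint and credentials below are hypothetical:

```properties
scalar.db.contact_points=jdbc:mysql://my-aurora-cluster.cluster-abcdefghijkl.ap-northeast-1.rds.amazonaws.com:3306/
scalar.db.username=scalar_user
scalar.db.password=scalar_password
scalar.db.storage=jdbc
```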
diff --git a/versioned_docs/version-3.9/scalar-kubernetes/SetupDatabaseForAzure.mdx b/versioned_docs/version-3.9/scalar-kubernetes/SetupDatabaseForAzure.mdx new file mode 100644 index 00000000..01ee90ad --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/SetupDatabaseForAzure.mdx @@ -0,0 +1,206 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Set up a database for ScalarDB/ScalarDL deployment on Azure + +This guide explains how to set up a database for ScalarDB/ScalarDL deployment on Azure. + +## Azure Cosmos DB for NoSQL + +### Authentication method + +When you use Cosmos DB for NoSQL, you must set `COSMOS_DB_URI` and `COSMOS_DB_KEY` in the ScalarDB/ScalarDL properties file as follows. + +```properties +scalar.db.contact_points= +scalar.db.password= +scalar.db.storage=cosmos +``` + +Please refer to the following document for more details on the properties for Cosmos DB for NoSQL. + +* [Configure ScalarDB for Cosmos DB for NoSQL](https://scalardb.scalar-labs.com/docs/latest/getting-started-with-scalardb#set-up-your-database-for-scalardb) + +### Required configuration/steps + +#### Create an Azure Cosmos DB account + +You must create an Azure Cosmos DB account with the NoSQL (core) API. You must set the **Capacity mode** to **Provisioned throughput** when you create it. Please refer to the official document for more details. + +* [Quickstart: Create an Azure Cosmos DB account, database, container, and items from the Azure portal](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-portal) + +#### Configure a default consistency configuration + +You must set the **Default consistency level** to **Strong**. Please refer to the official document for more details. + +* [Configure the default consistency level](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-consistency#configure-the-default-consistency-level) + +### Optional configurations/steps + +#### Configure backup configurations (Recommended in the production environment) + +You can configure the **Backup mode** as **Continuous backup mode** for PITR. Please refer to the official document for more details. + +* [Backup modes](https://learn.microsoft.com/en-us/azure/cosmos-db/online-backup-and-restore#backup-modes) + +It is recommended since the continuous backup mode takes backups automatically and continuously, so you can reduce downtime (pause duration) for backup operations. Please refer to the following document for more details on how to back up/restore the Scalar product data. + +* [Backup restore guide for Scalar products](BackupRestoreGuide.mdx) + +#### Configure monitoring (Recommended in the production environment) + +You can configure the monitoring of Cosmos DB using its native feature. Please refer to the official document for more details. + +* [Monitor Azure Cosmos DB](https://learn.microsoft.com/en-us/azure/cosmos-db/monitor) + +It is recommended since metrics and logs help you investigate issues in the production environment when they happen. + +#### Enable service endpoint (Recommended in the production environment) + +You can configure the Azure Cosmos DB account to allow access only from a specific subnet of a virtual network (VNet). Please refer to the official document for more details.
+ +* [Configure access to Azure Cosmos DB from virtual networks (VNet)](https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-configure-vnet-service-endpoint) + +It is recommended since private internal connections that do not traverse the WAN make a system more secure. + +#### Configure the Request Units (Optional based on your environment) + +You can configure the **Request Units** of Cosmos DB based on your requirements. Please refer to the official document for more details on Request Units. + +* [Request Units in Azure Cosmos DB](https://learn.microsoft.com/en-us/azure/cosmos-db/request-units) + +You can configure Request Units using ScalarDB/DL Schema Loader when you create a table. Please refer to the following document for more details on how to configure Request Units (RU) using ScalarDB/DL Schema Loader. + +* [ScalarDB Schema Loader](https://scalardb.scalar-labs.com/docs/latest/schema-loader) + +## Azure Database for MySQL + +### Authentication method + +When you use Azure Database for MySQL, you must set `JDBC_URL`, `USERNAME`, and `PASSWORD` in the ScalarDB/ScalarDL properties file as follows. + +```properties +scalar.db.contact_points= +scalar.db.username= +scalar.db.password= +scalar.db.storage=jdbc +``` + +Please refer to the following document for more details on the properties for Azure Database for MySQL (JDBC databases). + +* [Configure ScalarDB for JDBC databases](https://scalardb.scalar-labs.com/docs/latest/getting-started-with-scalardb#set-up-your-database-for-scalardb) + +### Required configuration/steps + +#### Create a database server + +You must create a database server. Please refer to the official document for more details. + +* [Quickstart: Use the Azure portal to create an Azure Database for MySQL Flexible Server](https://learn.microsoft.com/en-us/azure/mysql/flexible-server/quickstart-create-server-portal) + +You can choose **Single Server** or **Flexible Server** for your deployment. However, Flexible Server is recommended in Azure, and this document assumes that you use Flexible Server. Please refer to the official documents for more details on the deployment models. + +* [What is Azure Database for MySQL?](https://learn.microsoft.com/en-us/azure/mysql/single-server/overview#deployment-models) + +### Optional configurations/steps + +#### Configure backup configurations (Optional based on your environment) + +Azure Database for MySQL takes backups by default. You do not need to enable the backup feature manually. + +If you want to change backup settings, such as the backup retention period, you can do so. Please refer to the official document for more details. + +* [Backup and restore in Azure Database for MySQL Flexible Server](https://learn.microsoft.com/en-us/azure/mysql/flexible-server/concepts-backup-restore) + +Please refer to the following document for more details on how to back up/restore the Scalar product data. + +* [Backup restore guide for Scalar products](BackupRestoreGuide.mdx) + +#### Configure monitoring (Recommended in the production environment) + +You can configure the monitoring of Azure Database for MySQL using its native feature. Please refer to the official document for more details. + +* [Monitor Azure Database for MySQL Flexible Server](https://learn.microsoft.com/en-us/azure/mysql/flexible-server/concepts-monitoring) + +It is recommended since metrics and logs help you investigate issues in the production environment when they happen.
+ +#### Disable public access (Recommended in the production environment) + +You can configure **Private access (VNet Integration)** as the **Connectivity method**. Please refer to the official document for more details. + +* [Connectivity and networking concepts for Azure Database for MySQL - Flexible Server](https://learn.microsoft.com/en-us/azure/mysql/flexible-server/concepts-networking) + +You can access the database server from the Scalar product pods on your AKS cluster as follows. + +* Create the database server on the same VNet as your AKS cluster. +* Connect the VNet for the database server and the VNet for the AKS cluster that hosts the Scalar product deployment by using [Virtual network peering](https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview). (Note that this setup has not yet been tested with Scalar products.) + +It is recommended since private internal connections that do not traverse the WAN make a system more secure. + +## Azure Database for PostgreSQL + +### Authentication method + +When you use Azure Database for PostgreSQL, you must set `JDBC_URL`, `USERNAME`, and `PASSWORD` in the ScalarDB/ScalarDL properties file as follows. + +```properties +scalar.db.contact_points= +scalar.db.username= +scalar.db.password= +scalar.db.storage=jdbc +``` + +Please refer to the following document for more details on the properties for Azure Database for PostgreSQL (JDBC databases). + +* [Configure ScalarDB for JDBC databases](https://scalardb.scalar-labs.com/docs/latest/getting-started-with-scalardb#set-up-your-database-for-scalardb) + +### Required configuration/steps + +#### Create a database server + +You must create a database server. Please refer to the official document for more details. + +* [Quickstart: Create an Azure Database for PostgreSQL - Flexible Server in the Azure portal](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/quickstart-create-server-portal) + +You can choose **Single Server** or **Flexible Server** for your deployment. However, Flexible Server is recommended in Azure, and this document assumes that you use Flexible Server. Please refer to the official documents for more details on the deployment models. + +* [What is Azure Database for PostgreSQL?](https://learn.microsoft.com/en-us/azure/postgresql/single-server/overview#deployment-models) + +### Optional configurations/steps + +#### Configure backup configurations (Optional based on your environment) + +Azure Database for PostgreSQL takes backups by default. You do not need to enable the backup feature manually. + +If you want to change backup settings, such as the backup retention period, you can do so. Please refer to the official document for more details. + +* [Backup and restore in Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-backup-restore) + +Please refer to the following document for more details on how to back up/restore the Scalar product data. + +* [Backup restore guide for Scalar products](BackupRestoreGuide.mdx) + +#### Configure monitoring (Recommended in the production environment) + +You can configure the monitoring of Azure Database for PostgreSQL using its native feature. Please refer to the official document for more details.
+ +* [Monitor metrics on Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-monitoring) + +It is recommended since metrics and logs help you investigate issues in the production environment when they happen. + +#### Disable public access (Recommended in the production environment) + +You can configure **Private access (VNet Integration)** as the **Connectivity method**. Please refer to the official document for more details. + +* [Networking overview for Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-networking) + +You can access the database server from the Scalar product pods on your AKS cluster as follows. + +* Create the database server on the same VNet as your AKS cluster. +* Connect the VNet for the database server and the VNet for the AKS cluster that hosts the Scalar product deployment by using [Virtual network peering](https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview). (Note that this setup has not yet been tested with Scalar products.) + +It is recommended since private internal connections that do not traverse the WAN make a system more secure. diff --git a/versioned_docs/version-3.9/scalar-kubernetes/alerts/Envoy.mdx b/versioned_docs/version-3.9/scalar-kubernetes/alerts/Envoy.mdx new file mode 100644 index 00000000..95183274 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/alerts/Envoy.mdx @@ -0,0 +1,153 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Envoy Alerts + +## EnvoyClusterDown + +This is the most critical alert and indicates that an Envoy cluster is not able to process requests. This alert should be handled with the highest priority. + +### Example Alert + +#### Firing + +``` +[FIRING:1] EnvoyClusterDown - critical +Alert: Envoy cluster is down - critical + Description: Envoy cluster is down, no request can be processed + Details: + • alertname: EnvoyClusterDown + • deployment: prod-scalardl-envoy +``` + +#### Resolved + +``` +[RESOLVED] EnvoyClusterDown - critical +Alert: Envoy cluster is down - critical + Description: Envoy cluster is down, no request can be processed + Details: + • alertname: EnvoyClusterDown + • deployment: prod-scalardl-envoy +``` + +### Action Needed + +* Check the number of replicas with `kubectl get deployments prod-scalardl-envoy` +* Check the deployment status with `kubectl describe deployments prod-scalardl-envoy` +* Check node statuses with `kubectl get node -o wide` +* Check the log server to pinpoint the root cause of a failure by using the Kubernetes logs on the monitoring server `/log/kubernetes//-/kube.log` +* Check the cloud provider for any known issues. For example, you can check the status of Azure [here](https://status.azure.com/en-us/status). + +## EnvoyClusterDegraded + +This alert lets you know if a Kubernetes cluster cannot start Envoy pods, which means that the cluster does not have enough resources or has lost one or more Kubernetes nodes to run the deployment.
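As a quick first check, you can compare the desired and available replica counts for the Envoy deployment and review the pod list. The deployment name and namespace below match the example alert that follows and are environment-specific:

```console
kubectl get deployments prod-scalardl-envoy -n default
kubectl get pods -n default -o wide
```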
+ +### Example Alert + +#### Firing + +``` +[FIRING:1] EnvoyClusterDegraded - warning +Alert: Envoy cluster is running in a degraded mode - warning + Description: Envoy cluster is running in a degraded mode, some of the Envoy pods are not healthy + Details: + • alertname: EnvoyClusterDegraded + • deployment: prod-scalardl-envoy +``` + +#### Resolved + +``` +[RESOLVED] EnvoyClusterDegraded - warning +Alert: Envoy cluster is running in a degraded mode - warning + Description: Envoy cluster is running in a degraded mode, some of the Envoy pods are not healthy + Details: + • alertname: EnvoyClusterDegraded + • deployment: prod-scalardl-envoy +``` + +### Action Needed + +* Check the log server to pinpoint the root cause of a failure by using the Kubernetes logs on the monitoring server `/log/kubernetes//-/kube.log` or `kubectl logs prod-scalardl-envoy-xxxx-yyyy` +* Check the Kubernetes deployment with `kubectl describe deployments prod-scalardl-envoy` +* Check the replica sets with `kubectl get replicasets.apps` +* Check node statuses with `kubectl get node -o wide` +* Check the cloud provider for any known issues. For example, you can check the status of Azure [here](https://status.azure.com/en-us/status). + +## EnvoyPodsPending + +This alert lets you know if a Kubernetes cluster cannot start Envoy pods, which means that the cluster does not have enough resources. + +### Example Alert + +#### Firing + +``` +[FIRING:1] EnvoyPodsPending - warning +Alert: Pod prod-scalardl-envoy-xxxx-yyyy in namespace default in pending status - warning + Description: Pod prod-scalardl-envoy-xxxx-yyyy in namespace default has been in pending status for more than 1 minute. + Details: + • alertname: EnvoyPodsPending + • deployment: prod-scalardl-envoy +``` + +#### Resolved + +``` +[RESOLVED:1] EnvoyPodsPending - warning +Alert: Pod prod-scalardl-envoy-xxxx-yyyy in namespace default in pending status - warning + Description: Pod prod-scalardl-envoy-xxxx-yyyy in namespace default has been in pending status for more than 1 minute. + Details: + • alertname: EnvoyPodsPending + • deployment: prod-scalardl-envoy +``` + +### Action Needed + +* Check the log server to pinpoint the root cause of a failure by using the Kubernetes logs on the monitoring server `/log/kube//*.log` +* Check the pod details with `kubectl describe pod prod-scalardl-envoy-xxxx-yyyy` + +## EnvoyPodsError + +This alert lets you know if a Kubernetes cluster cannot start Envoy pods for one of the following reasons: + +* CrashLoopBackOff +* CreateContainerConfigError +* CreateContainerError +* ErrImagePull +* ImagePullBackOff +* InvalidImageName + +### Example Alert + +#### Firing + +``` +[FIRING:1] EnvoyPodsError - warning +Alert: Pod prod-scalardl-envoy-xxxx-yyyy in namespace default has an error status - warning + Description: Pod prod-scalardl-envoy-xxxx-yyyy in namespace default has been in an error status for more than 1 minute. + Details: + • alertname: EnvoyPodsError + • deployment: prod-scalardl-envoy +``` + +#### Resolved + +``` +[RESOLVED:1] EnvoyPodsError - warning +Alert: Pod prod-scalardl-envoy-xxxx-yyyy in namespace default has an error status - warning + Description: Pod prod-scalardl-envoy-xxxx-yyyy in namespace default has been in an error status for more than 1 minute.
+ Details: + • alertname: EnvoyPodsError + • deployment: prod-scalardl-envoy +``` + +### Action Needed + +* Check the pod details with `kubectl describe pod prod-scalardl-envoy-xxxx-yyyy` +* Check the log server to pinpoint the root cause of a failure by using the Kubernetes logs on the monitoring server `/log/kubernetes//-/kube.log` diff --git a/versioned_docs/version-3.9/scalar-kubernetes/alerts/Ledger.mdx b/versioned_docs/version-3.9/scalar-kubernetes/alerts/Ledger.mdx new file mode 100644 index 00000000..72d14971 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/alerts/Ledger.mdx @@ -0,0 +1,150 @@ +--- +displayed_sidebar: docsEnglish +--- + +# Ledger Alerts + +## LedgerClusterDown + +This is the most critical alert and indicates that a Ledger cluster is not able to process requests. This alert should be handled with the highest priority. + +### Example Alert + +#### Firing + +``` +[FIRING:1] LedgerClusterDown - critical +Alert: Ledger cluster is down - critical + Description: Ledger cluster is down, no request can be processed. + Details: + • alertname: LedgerClusterDown + • deployment: prod-scalardl-ledger +``` + +#### Resolved + +``` +[RESOLVED] LedgerClusterDown - critical +Alert: Ledger cluster is down - critical + Description: Ledger cluster is down, no request can be processed. + Details: + • alertname: LedgerClusterDown + • deployment: prod-scalardl-ledger +``` + +### Action Needed + +* Check the number of replicas with `kubectl get deployments prod-scalardl-ledger` +* Check the deployment status with `kubectl describe deployments prod-scalardl-ledger` +* Check node statuses with `kubectl get node -o wide` +* Check the log server to pinpoint the root cause of a failure by using the Kubernetes logs on the monitoring server `/log/kubernetes//-/kube.log` +* Check the cloud provider for any known issues. For example, you can check the status of Azure [here](https://status.azure.com/en-us/status). + +## LedgerClusterDegraded + +This alert lets you know if a Kubernetes cluster cannot start Ledger pods, which means that the cluster does not have enough resources or has lost one or more Kubernetes nodes to run the deployment. + +### Example Alert + +#### Firing + +``` +[FIRING:1] LedgerClusterDegraded - warning +Alert: Ledger cluster is running in a degraded mode - warning + Description: Ledger cluster is running in a degraded mode, some of the Ledger pods are not healthy. + Details: + • alertname: LedgerClusterDegraded + • deployment: prod-scalardl-ledger +``` + +#### Resolved + +``` +[RESOLVED] LedgerClusterDegraded - warning +Alert: Ledger cluster is running in a degraded mode - warning + Description: Ledger cluster is running in a degraded mode, some of the Ledger pods are not healthy. + Details: + • alertname: LedgerClusterDegraded + • deployment: prod-scalardl-ledger +``` + +### Action Needed + +* Check the log server to pinpoint the root cause of a failure by using the Kubernetes logs on the monitoring server `/log/kubernetes//-/kube.log` +* Check the Kubernetes deployment with `kubectl describe deployments prod-scalardl-ledger` +* Check the replica sets with `kubectl get replicasets.apps` +* Check node statuses with `kubectl get node -o wide` +* Check the cloud provider for any known issues. For example, you can check the status of Azure [here](https://status.azure.com/en-us/status). + +## LedgerPodsPending + +This alert lets you know if a Kubernetes cluster cannot start Ledger pods, which means that the cluster does not have enough resources.
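To see which pods are stuck in `Pending` and why, you can list them with a field selector and then inspect the scheduling events of a specific pod. The pod name and namespace below are examples:

```console
kubectl get pods -n default --field-selector=status.phase=Pending
kubectl describe pod prod-scalardl-ledger-xxxx-yyyy -n default
```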
+ +### Example Alert + +#### Firing + +``` +[FIRING:1] LedgerPodsPending - warning +Alert: Pod prod-scalardl-ledger-xxxx-yyyy in namespace default in pending status - warning + Description: Pod prod-scalardl-ledger-xxxx-yyyy in namespace default has been in pending status for more than 1 minute. + Details: + • alertname: LedgerPodsPending + • deployment: prod-scalardl-ledger +``` + +#### Resolved + +``` +[RESOLVED:1] LedgerPodsPending - warning +Alert: Pod prod-scalardl-ledger-xxxx-yyyy in namespace default in pending status - warning + Description: Pod prod-scalardl-ledger-xxxx-yyyy in namespace default has been in pending status for more than 1 minute. + Details: + • alertname: LedgerPodsPending + • deployment: prod-scalardl-ledger +``` + +### Action Needed + +* Check the log server to pinpoint the root cause of a failure by using the Kubernetes logs on the monitoring server `/log/kubernetes//-/kube.log` +* Check the pod details with `kubectl describe pod prod-scalardl-ledger-xxxx-yyyy` + +## LedgerPodsError + +This alert lets you know if a Kubernetes cluster cannot start Ledger pods for one of the following reasons: + +* CrashLoopBackOff +* CreateContainerConfigError +* CreateContainerError +* ErrImagePull +* ImagePullBackOff +* InvalidImageName + +### Example Alert + +#### Firing + +``` +[FIRING:1] LedgerPodsError - warning +Alert: Pod prod-scalardl-ledger-xxxx-yyyy in namespace default has an error status - warning + Description: Pod prod-scalardl-ledger-xxxx-yyyy in namespace default has been in an error status for more than 1 minute. + Details: + • alertname: LedgerPodsError + • deployment: prod-scalardl-ledger +``` + +#### Resolved + +``` +[RESOLVED:1] LedgerPodsError - warning +Alert: Pod prod-scalardl-ledger-xxxx-yyyy in namespace default has an error status - warning + Description: Pod prod-scalardl-ledger-xxxx-yyyy in namespace default has been in an error status for more than 1 minute. + Details: + • alertname: LedgerPodsError + • deployment: prod-scalardl-ledger +``` + +### Action Needed + +* Check the pod details with `kubectl describe pod prod-scalardl-ledger-xxxx-yyyy` +* Check the log server to pinpoint the root cause of a failure by using the Kubernetes logs on the monitoring server `/log/kubernetes//-/kube.log` diff --git a/versioned_docs/version-3.9/scalar-kubernetes/alerts/README.mdx b/versioned_docs/version-3.9/scalar-kubernetes/alerts/README.mdx new file mode 100644 index 00000000..6048b065 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/alerts/README.mdx @@ -0,0 +1,13 @@ +--- +tags: + - Enterprise Standard + - Enterprise Premium +displayed_sidebar: docsEnglish +--- + +# Scalar Alerts + +This section covers the types of alerts and what actions need to be taken.
+ +* [Envoy Alerts](Envoy.mdx) +* [Ledger Alerts](Ledger.mdx) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio new file mode 100644 index 00000000..521191ae --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio @@ -0,0 +1,299 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Cluster_Indirect_Mode.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Cluster_Indirect_Mode.drawio new file mode 100644 index 00000000..d70abc18 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Cluster_Indirect_Mode.drawio @@ -0,0 +1,319 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Server_App_In_Cluster.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Server_App_In_Cluster.drawio new file mode 100644 index 00000000..b03959f1 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Server_App_In_Cluster.drawio @@ -0,0 +1,299 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Server_App_Out_Cluster.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Server_App_Out_Cluster.drawio new file mode 100644 index 00000000..170180e4 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDB_Server_App_Out_Cluster.drawio @@ -0,0 +1,319 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Auditor_Multi_Account.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Auditor_Multi_Account.drawio new file mode 100644 index 00000000..f720b59b --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Auditor_Multi_Account.drawio @@ -0,0 +1,633 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Auditor_Multi_Namespace.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Auditor_Multi_Namespace.drawio new file mode 100644 index 00000000..b83c95c0 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Auditor_Multi_Namespace.drawio @@ -0,0 +1,528 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Auditor_Multi_VNet.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Auditor_Multi_VNet.drawio new file mode 100644 index 00000000..b3b921a4 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Auditor_Multi_VNet.drawio @@ -0,0 +1,627 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Ledger.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Ledger.drawio new file mode 100644 index 00000000..8d5e6d1c --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/AKS_ScalarDL_Ledger.drawio @@ -0,0 +1,344 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio new file mode 100644 index 00000000..bf20bbeb --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio @@ -0,0 +1,277 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Cluster_Indirect_Mode.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Cluster_Indirect_Mode.drawio new file mode 100644 index 00000000..4258fdb1 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Cluster_Indirect_Mode.drawio @@ -0,0 +1,310 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Server_App_In_Cluster.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Server_App_In_Cluster.drawio new file mode 100644 index 00000000..78f1214b --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Server_App_In_Cluster.drawio @@ -0,0 +1,277 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Server_App_Out_Cluster.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Server_App_Out_Cluster.drawio new file mode 100644 index 00000000..20c85a77 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDB_Server_App_Out_Cluster.drawio @@ -0,0 +1,310 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Auditor_Multi_Account.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Auditor_Multi_Account.drawio new file mode 100644 index 00000000..07866748 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Auditor_Multi_Account.drawio @@ -0,0 +1,588 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Auditor_Multi_Namespace.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Auditor_Multi_Namespace.drawio new file mode 100644 index 00000000..ed88e727 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Auditor_Multi_Namespace.drawio @@ -0,0 +1,494 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Auditor_Multi_VPC.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Auditor_Multi_VPC.drawio new file mode 100644 index 00000000..8a0eaab0 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Auditor_Multi_VPC.drawio @@ -0,0 +1,578 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Ledger.drawio b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Ledger.drawio new file mode 100644 index 00000000..88d415c4 --- /dev/null +++ b/versioned_docs/version-3.9/scalar-kubernetes/images/drawio/EKS_ScalarDL_Ledger.drawio @@ -0,0 +1,310 @@ (draw.io diagram XML omitted) diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio.png new file mode 100644 index 00000000..5ceef088 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Cluster_Indirect_Mode.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Cluster_Indirect_Mode.drawio.png new file mode 100644 index 00000000..feef0d81 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Cluster_Indirect_Mode.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Server_App_In_Cluster.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Server_App_In_Cluster.drawio.png new file mode 100644 index 00000000..c6c9e06e Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Server_App_In_Cluster.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Server_App_Out_Cluster.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Server_App_Out_Cluster.drawio.png new file mode 100644 index 00000000..028fbe7c Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDB_Server_App_Out_Cluster.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Auditor_Multi_Account.drawio.png
b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Auditor_Multi_Account.drawio.png new file mode 100644 index 00000000..76e1aa16 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Auditor_Multi_Account.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Auditor_Multi_Namespace.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Auditor_Multi_Namespace.drawio.png new file mode 100644 index 00000000..026b4a2d Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Auditor_Multi_Namespace.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Auditor_Multi_VNet.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Auditor_Multi_VNet.drawio.png new file mode 100644 index 00000000..92eba96d Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Auditor_Multi_VNet.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Ledger.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Ledger.drawio.png new file mode 100644 index 00000000..9ee4fd22 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/AKS_ScalarDL_Ledger.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio.png new file mode 100644 index 00000000..00fef239 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Cluster_Direct_Kubernetes_Mode.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Cluster_Indirect_Mode.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Cluster_Indirect_Mode.drawio.png new file mode 100644 index 00000000..db122e17 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Cluster_Indirect_Mode.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Server_App_In_Cluster.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Server_App_In_Cluster.drawio.png new file mode 100644 index 00000000..c49fbe4f Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Server_App_In_Cluster.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Server_App_Out_Cluster.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Server_App_Out_Cluster.drawio.png new file mode 100644 index 00000000..d8dcde16 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDB_Server_App_Out_Cluster.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Auditor_Multi_Account.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Auditor_Multi_Account.drawio.png new file mode 100644 index 00000000..1d9e7889 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Auditor_Multi_Account.drawio.png differ diff --git 
a/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Auditor_Multi_Namespace.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Auditor_Multi_Namespace.drawio.png new file mode 100644 index 00000000..bea249f3 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Auditor_Multi_Namespace.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Auditor_Multi_VPC.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Auditor_Multi_VPC.drawio.png new file mode 100644 index 00000000..30d5af46 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Auditor_Multi_VPC.drawio.png differ diff --git a/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Ledger.drawio.png b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Ledger.drawio.png new file mode 100644 index 00000000..ce5fd7b5 Binary files /dev/null and b/versioned_docs/version-3.9/scalar-kubernetes/images/png/EKS_ScalarDL_Ledger.drawio.png differ