
Commit

Fix Docusaurus deployment (#210)
* Change links from `.md#` to `.mdx#`

* Fix `Edit this page` button

* Specify URL for logo

If the URL doesn't include the slash, clicking the logo in the navbar will not redirect users to the docs landing page

* Fix misspelling

* Add correct doc for this version

* Comment out Algolia Docsearch

Algolia DocSearch isn't working, so we'll use Lunr search in the meantime.

* Add `docusaurus-lunr-search` plugin for search

This search plugin is temporary until we can get Algolia DocSearch to work.
josh-wong authored Apr 16, 2024
1 parent 1586266 commit 5d46bad
Showing 358 changed files with 1,715 additions and 1,849 deletions.
4 changes: 2 additions & 2 deletions docs/backup-restore.mdx
Original file line number Diff line number Diff line change
@@ -45,7 +45,7 @@ stop the Cassandra cluster and take the copies of all the nodes of the cluster,

To avoid mistakes, it is recommended to use [Cassy](https://github.com/scalar-labs/cassy).
Cassy is also integrated with [`scalar-admin`](https://github.com/scalar-labs/scalar-admin) so it can issue a pause request to the application of a Cassandra cluster.
-Please see [the doc](https://github.com/scalar-labs/cassy/blob/master/docs/getting-started.md#take-cluster-wide-consistent-backups) for more details.
+Please see [the doc](https://github.com/scalar-labs/cassy/blob/master/docs/getting-started.mdx#take-cluster-wide-consistent-backups) for more details.

**Cosmos DB**

@@ -59,7 +59,7 @@ To specify a transactionally-consistent restore point, please pause ScalarDL ser

## Restore Backups of Ledger Databases

-To restore backups, you must follow the [Restore Backup](https://github.com/scalar-labs/scalardb/blob/master/docs/backup-restore.md#restore-backup) section.
+To restore backups, you must follow the [Restore Backup](https://github.com/scalar-labs/scalardb/blob/master/docs/backup-restore.mdx#restore-backup) section.
You must stop ScalarDL Ledger services before restoring database backups and start the ScalarDL Ledger services after restoring the backups.
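The stop/start steps above can be sketched with `kubectl`; the deployment name and replica count below are assumptions, so check your own release with `kubectl get deployment` first:

```console
# Assumed deployment name and replica count; verify with `kubectl get deployment`.
kubectl scale deployment scalardl-ledger --replicas=0   # stop ScalarDL Ledger

# ...restore the database backups here...

kubectl scale deployment scalardl-ledger --replicas=3   # start ScalarDL Ledger again
```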

## Create/Restore Backups of Auditor Databases
@@ -163,7 +163,7 @@ scalardbAnalyticsPostgreSQL:

:::note

-You must create a secret resource with this name (`scalardb-analytics-postgresql-superuser-password` by default) before you deploy ScalarDB Analytics with PostgreSQL. For details, see [Prepare a secret resource](how-to-deploy-scalardb-analytics-postgresql.md#prepare-a-secret-resource).
+You must create a secret resource with this name (`scalardb-analytics-postgresql-superuser-password` by default) before you deploy ScalarDB Analytics with PostgreSQL. For details, see [Prepare a secret resource](how-to-deploy-scalardb-analytics-postgresql.mdx#prepare-a-secret-resource).

:::
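As a hedged sketch, such a secret can be created with `kubectl`; the key name `superuser-password` is an assumption, so check the chart documentation for the exact key it expects:

```console
# The secret name comes from the note above; the key name is an assumed example.
kubectl create secret generic scalardb-analytics-postgresql-superuser-password \
  --from-literal=superuser-password=<YOUR_PASSWORD>
```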

2 changes: 1 addition & 1 deletion docs/helm-charts/configure-custom-values-scalardb.mdx
@@ -43,7 +43,7 @@ If you're using AWS or Azure, please refer to the following documents for more d

### Database configurations

-You must set `scalardb.databaseProperties`. Please set your `database.properties` to this parameter. Please refer to [Configure ScalarDB Server](https://github.com/scalar-labs/scalardb/blob/master/docs/scalardb-server.md#configure-scalardb-server) for more details on the configuration of ScalarDB Server.
+You must set `scalardb.databaseProperties`. Please set your `database.properties` to this parameter. Please refer to [Configure ScalarDB Server](https://github.com/scalar-labs/scalardb/blob/master/docs/scalardb-server.mdx#configure-scalardb-server) for more details on the configuration of ScalarDB Server.

```yaml
scalardb:
```
@@ -54,7 +54,7 @@ You must set a private key file to `scalar.dl.auditor.private_key_path` and a ce

You must also mount the private key file and the certificate file on the ScalarDL Auditor pod.

-For more details on how to mount the private key file and the certificate file, refer to [Mount key and certificate files on a pod in ScalarDL Helm Charts](mount-files-or-volumes-on-scalar-pods.md#mount-key-and-certificate-files-on-a-pod-in-scalardl-helm-charts).
+For more details on how to mount the private key file and the certificate file, refer to [Mount key and certificate files on a pod in ScalarDL Helm Charts](mount-files-or-volumes-on-scalar-pods.mdx#mount-key-and-certificate-files-on-a-pod-in-scalardl-helm-charts).

## Optional configurations

@@ -54,7 +54,7 @@ If you set `scalar.dl.ledger.proof.enabled` to `true` (this configuration is req

In this case, you must mount the private key file on the ScalarDL Ledger pod.

-For more details on how to mount the private key file, refer to [Mount key and certificate files on a pod in ScalarDL Helm Charts](mount-files-or-volumes-on-scalar-pods.md#mount-key-and-certificate-files-on-a-pod-in-scalardl-helm-charts).
+For more details on how to mount the private key file, refer to [Mount key and certificate files on a pod in ScalarDL Helm Charts](mount-files-or-volumes-on-scalar-pods.mdx#mount-key-and-certificate-files-on-a-pod-in-scalardl-helm-charts).

## Optional configurations

2 changes: 1 addition & 1 deletion docs/helm-charts/how-to-deploy-scalardl-auditor.mdx
@@ -9,7 +9,7 @@ This document explains how to deploy ScalarDL Auditor using Scalar Helm Charts.

When you deploy ScalarDL Auditor, you must create a Secret resource to mount the private key file and the certificate file on the ScalarDL Auditor pods.

-For more details on how to mount the key and certificate files on the ScalarDL pods, refer to [Mount key and certificate files on a pod in ScalarDL Helm Charts](mount-files-or-volumes-on-scalar-pods.md#mount-key-and-certificate-files-on-a-pod-in-scalardl-helm-charts).
+For more details on how to mount the key and certificate files on the ScalarDL pods, refer to [Mount key and certificate files on a pod in ScalarDL Helm Charts](mount-files-or-volumes-on-scalar-pods.mdx#mount-key-and-certificate-files-on-a-pod-in-scalardl-helm-charts).
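For illustration, such a Secret resource might be created from the key and certificate files like this; the secret name and file paths here are hypothetical:

```console
# Hypothetical secret and file names; use the names referenced in your custom values file.
kubectl create secret generic auditor-keys \
  --from-file=private-key.pem=/path/to/auditor-key.pem \
  --from-file=certificate.pem=/path/to/auditor-cert.pem
```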

## Create schemas for ScalarDL Auditor (Deploy ScalarDL Schema Loader)

2 changes: 1 addition & 1 deletion docs/helm-charts/how-to-deploy-scalardl-ledger.mdx
@@ -11,7 +11,7 @@ If you use the [asset proofs](https://github.com/scalar-labs/scalardl/blob/maste

Please refer to the following document for more details on how to mount the key/certificate files on the ScalarDL pods.

-* [Mount key and certificate files on a pod in ScalarDL Helm Charts](mount-files-or-volumes-on-scalar-pods.md#mount-key-and-certificate-files-on-a-pod-in-scalardl-helm-charts)
+* [Mount key and certificate files on a pod in ScalarDL Helm Charts](mount-files-or-volumes-on-scalar-pods.mdx#mount-key-and-certificate-files-on-a-pod-in-scalardl-helm-charts)

## Create schemas for ScalarDL Ledger (Deploy ScalarDL Schema Loader)

2 changes: 1 addition & 1 deletion docs/how-to-handle-errors.mdx
@@ -4,7 +4,7 @@ This document sets out some guidelines for handling errors in ScalarDL.

## Basics

-ScalarDL expects users to use [Client SDKs](https://github.com/scalar-labs/scalardl/blob/master/docs/index.md#client-sdks) to properly interact with the ScalarDL system.
+ScalarDL expects users to use [Client SDKs](https://github.com/scalar-labs/scalardl/blob/master/docs/index.mdx#client-sdks) to properly interact with the ScalarDL system.
When an error occurs, the Client SDKs return an Exception (or an Error in JavaScript-based SDKs) with a status code to users.
Users are expected to check the status code to identify the cause of errors.

2 changes: 1 addition & 1 deletion docs/how-to-write-contract.mdx
@@ -108,7 +108,7 @@ For more details about `Function`, please check [How to Write Function for Scala

In non-deprecated Contracts like `JacksonBasedContract`, you can send some information to Functions by calling `void setContext(T context)`.
Note that the base Contract class that you use determines the argument type `T`.
-For details on how to receive information from Contracts in Functions, see [Receive information from Contracts](./how-to-write-function.md#receive-information-from-contracts).
+For details on how to receive information from Contracts in Functions, see [Receive information from Contracts](./how-to-write-function.mdx#receive-information-from-contracts).

```Java
JsonNode context = getObjectMapper().createObjectNode().put(...);
```
4 changes: 2 additions & 2 deletions docs/how-to-write-function.mdx
@@ -102,7 +102,7 @@ The old [Function](https://scalar-labs.github.io/scalardl/javadoc/ledger/com/sca

### About the `invoke` arguments

-Similar to a Contract using a `Ledger` object to manage assets, a Function uses a `Database` object to manage records of the underlying database. Note that `Database` implements the [ScalarDB](https://github.com/scalar-labs/scalardb) interface so that you can do CRUD operations based on [the data model](https://github.com/scalar-labs/scalardb/blob/master/docs/design.md#data-model) of ScalarDB.
+Similar to a Contract using a `Ledger` object to manage assets, a Function uses a `Database` object to manage records of the underlying database. Note that `Database` implements the [ScalarDB](https://github.com/scalar-labs/scalardb) interface so that you can do CRUD operations based on [the data model](https://github.com/scalar-labs/scalardb/blob/master/docs/design.mdx#data-model) of ScalarDB.

A `functionArgument` is a runtime argument for the Function specified by the requester. Unlike the contract argument, it is not digitally signed, so it can be used to pass data that is stored in the database but might be deleted at some later point for some reason.

@@ -112,7 +112,7 @@ A `functionArgument` is a runtime argument for the Function specified by the req

In non-deprecated Functions like `JacksonBasedFunction`, you can receive some information from Contracts by calling `T getContractContext()`.
Note that the return value can be null if the Contract has set nothing, and the base Function class that you use determines the return type `T`.
-For details on how to send information to Functions from Contracts, see [Send information to Functions](./how-to-write-contract.md#send-information-to-functions).
+For details on how to send information to Functions from Contracts, see [Send information to Functions](./how-to-write-contract.mdx#send-information-to-functions).

```Java
JsonNode context = getContractContext();
```
4 changes: 2 additions & 2 deletions docs/scalar-kubernetes/AccessScalarProducts.mdx
@@ -89,7 +89,7 @@ If you deploy your application (client) in an environment outside the Kubernetes

You can create a load balancer by setting `envoy.service.type` to `LoadBalancer` in your custom values file. After configuring the custom values file, you can use Scalar Envoy through a Kubernetes service resource by using the load balancer. You can also set the load balancer configurations by using annotations.

-For more details on how to configure your custom values file, see [Service configurations](https://github.com/scalar-labs/helm-charts/blob/main/docs/configure-custom-values-envoy.md#service-configurations).
+For more details on how to configure your custom values file, see [Service configurations](https://github.com/scalar-labs/helm-charts/blob/main/docs/configure-custom-values-envoy.mdx#service-configurations).
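A minimal custom-values sketch for this; the `envoy.service.type` key is stated above, while the `annotations` key and the AWS-internal annotation shown are assumptions to adjust for your cloud provider:

```yaml
envoy:
  service:
    type: LoadBalancer
    # Assumed annotation key and value; adjust for your cloud provider's load balancer.
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```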

When using a load balancer, you must set the FQDN or IP address of the load balancer in the properties file for the application (client) as follows.
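The collapsed example presumably sets the contact point; a hedged sketch of such a client `database.properties`, assuming the standard ScalarDB client property names:

```properties
# Assumed property names; check the ScalarDB Server client configuration docs.
scalar.db.contact_points=<LOAD_BALANCER_FQDN_OR_IP>
scalar.db.contact_port=60051
```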

@@ -132,7 +132,7 @@ The concrete implementation of the load balancer and access method depend on the

You can run client requests to ScalarDB or ScalarDL from a bastion server by running the `kubectl port-forward` command. If you create a ScalarDL Auditor mode environment, however, you must run two `kubectl port-forward` commands with different kubeconfig files from one bastion server to access two Kubernetes clusters.

-1. **(ScalarDL Auditor mode only)** In the bastion server for ScalarDL Ledger, configure an existing kubeconfig file or add a new kubeconfig file to access the Kubernetes cluster for ScalarDL Auditor. For details on how to configure the kubeconfig file of each managed Kubernetes cluster, see [Configure kubeconfig](CreateBastionServer.md#configure-kubeconfig).
+1. **(ScalarDL Auditor mode only)** In the bastion server for ScalarDL Ledger, configure an existing kubeconfig file or add a new kubeconfig file to access the Kubernetes cluster for ScalarDL Auditor. For details on how to configure the kubeconfig file of each managed Kubernetes cluster, see [Configure kubeconfig](CreateBastionServer.mdx#configure-kubeconfig).
2. Configure port forwarding to each service from the bastion server.
* **ScalarDB Server**
```console
```
6 changes: 3 additions & 3 deletions docs/scalar-kubernetes/BackupNoSQL.mdx
@@ -13,10 +13,10 @@ In this guide, we assume that you are using point-in-time recovery (PITR) or its
* **The ScalarDB or ScalarDL pod names in the `NAME` column.** Write down the pod names so that you can compare those names with the pod names after performing the backup.
* **The ScalarDB or ScalarDL pod status is `Running` in the `STATUS` column.** Confirm that the pods are running before proceeding with the backup. You will need to pause the pods in the next step.
* **The restart count of each pod in the `RESTARTS` column.** Write down the restart count of each pod so that you can compare the count with the restart counts after performing the backup.
-2. Pause the ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to pause the pods, see the [Details on using `scalar-admin`](BackupNoSQL.md#details-on-using-scalar-admin) section in this guide.
+2. Pause the ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to pause the pods, see the [Details on using `scalar-admin`](BackupNoSQL.mdx#details-on-using-scalar-admin) section in this guide.
3. Write down the `pause completed` time. You will need to refer to that time when restoring the data by using the PITR feature.
4. Back up each database by using the backup feature. If you have enabled the automatic backup and PITR features, the managed databases will perform backups automatically. Note that you should wait for approximately 10 seconds to create a sufficiently long period that avoids clock-skew issues between the client clock and the database clock. This 10-second period is the exact period in which you can restore data by using the PITR feature.
-5. Unpause ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to unpause the pods, see the [Details on using `scalar-admin`](BackupNoSQL.md#details-on-using-scalar-admin) section in this guide.
+5. Unpause ScalarDB or ScalarDL pods by using `scalar-admin`. For details on how to unpause the pods, see the [Details on using `scalar-admin`](BackupNoSQL.mdx#details-on-using-scalar-admin) section in this guide.
6. Check the `unpause started` time. You must check the `unpause started` time to confirm the exact period in which you can restore data by using the PITR feature.
7. Check the pod status after performing the backup. You must check the following four points by using the `kubectl get pod` command after the backup operation is completed.
* **The number of ScalarDB or ScalarDL pods.** Confirm this number matches the number of pods that you wrote down before performing the backup.
@@ -25,7 +25,7 @@ In this guide, we assume that you are using point-in-time recovery (PITR) or its
   * **The restart count of each pod in the `RESTARTS` column.** Confirm the counts match the restart counts that you wrote down before performing the backup.

   **If any of these values differ, you must retry the backup operation from the beginning.** Differences may be caused by pods being added or restarted while performing the backup. In such cases, those pods will run in the `unpause` state, which will cause the backup data to be transactionally inconsistent.
-8. **(Amazon DynamoDB only)** If you use the PITR feature of DynamoDB, you will need to perform additional steps to create a backup because the feature restores data to a table with another name by using PITR. For details on the additional steps after creating the exact period in which you can restore the data, please see [Restore databases in a Kubernetes environment](RestoreDatabase.md#amazon-dynamodb).
+8. **(Amazon DynamoDB only)** If you use the PITR feature of DynamoDB, you will need to perform additional steps to create a backup because the feature restores data to a table with another name by using PITR. For details on the additional steps after creating the exact period in which you can restore the data, please see [Restore databases in a Kubernetes environment](RestoreDatabase.mdx#amazon-dynamodb).
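The before/after comparison in the steps above can be sketched as follows; the `kubectl` command in the comment is one assumed way to take the snapshots, and the pod names here are placeholders so the comparison logic stays self-contained:

```shell
#!/bin/sh
# Minimal sketch of comparing pod state before and after a backup window.
# In a real cluster the snapshots would come from something like:
#   kubectl get pod -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[0].restartCount}{"\n"}{end}'
# Fixed strings stand in for the two snapshots here.
before="scalardb-0 0
scalardb-1 1"
after="scalardb-0 0
scalardb-1 1"

# Any difference means pods were added or restarted mid-backup.
if [ "$before" = "$after" ]; then
  echo "backup window is consistent"
else
  echo "pods changed during backup; retry from the beginning"
fi
```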

## Back up multiple databases

8 changes: 4 additions & 4 deletions docs/scalar-kubernetes/K8sLogCollectionGuide.mdx
@@ -36,8 +36,8 @@ Please get the sample file [scalar-loki-stack-custom-values.yaml](https://github

In the production environment, it is recommended to add labels to the worker node for Scalar products as follows.

-* [EKS - Add a label to the worker node that is used for nodeAffinity](https://github.com/scalar-labs/scalar-kubernetes/blob/master/docs/CreateEKSClusterForScalarProducts.md#add-a-label-to-the-worker-node-that-is-used-for-nodeaffinity)
-* [AKS - Add a label to the worker node that is used for nodeAffinity](https://github.com/scalar-labs/scalar-kubernetes/blob/master/docs/CreateAKSClusterForScalarProducts.md#add-a-label-to-the-worker-node-that-is-used-for-nodeaffinity)
+* [EKS - Add a label to the worker node that is used for nodeAffinity](https://github.com/scalar-labs/scalar-kubernetes/blob/master/docs/CreateEKSClusterForScalarProducts.mdx#add-a-label-to-the-worker-node-that-is-used-for-nodeaffinity)
+* [AKS - Add a label to the worker node that is used for nodeAffinity](https://github.com/scalar-labs/scalar-kubernetes/blob/master/docs/CreateAKSClusterForScalarProducts.mdx#add-a-label-to-the-worker-node-that-is-used-for-nodeaffinity)

Since the promtail pods deployed in this document collect only Scalar product logs, it is sufficient to deploy promtail pods only on the worker nodes where Scalar products are running. Therefore, if you add labels to your Kubernetes worker nodes, set `nodeSelector` in the custom values file (scalar-loki-stack-custom-values.yaml) as follows.
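A sketch of the collapsed values snippet; the label key and value are assumptions to match whatever label you actually added, and this assumes the loki-stack chart forwards `nodeSelector` to the promtail subchart:

```yaml
promtail:
  # Assumed label key/value; match the label you added to your worker nodes.
  nodeSelector:
    scalar-labs.com/dedicated-node: scalar-products
```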

@@ -70,8 +70,8 @@ Since the promtail pods deployed in this document collect only Scalar product lo
In the production environment, it is recommended to add taints to the worker node for Scalar products as follows.
-* [EKS - Add taint to the worker node that is used for toleration](https://github.com/scalar-labs/scalar-kubernetes/blob/master/docs/CreateEKSClusterForScalarProducts.md#add-taint-to-the-worker-node-that-is-used-for-toleration)
-* [AKS - Add taint to the worker node that is used for toleration](https://github.com/scalar-labs/scalar-kubernetes/blob/master/docs/CreateAKSClusterForScalarProducts.md#add-taint-to-the-worker-node-that-is-used-for-toleration)
+* [EKS - Add taint to the worker node that is used for toleration](https://github.com/scalar-labs/scalar-kubernetes/blob/master/docs/CreateEKSClusterForScalarProducts.mdx#add-taint-to-the-worker-node-that-is-used-for-toleration)
+* [AKS - Add taint to the worker node that is used for toleration](https://github.com/scalar-labs/scalar-kubernetes/blob/master/docs/CreateAKSClusterForScalarProducts.mdx#add-taint-to-the-worker-node-that-is-used-for-toleration)
Since promtail pods are deployed as a DaemonSet, if you add taints to your Kubernetes worker nodes, you must set `tolerations` in the custom values file (scalar-loki-stack-custom-values.yaml) as follows.
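Correspondingly, a hedged sketch of the `tolerations` entry; the taint key, value, and effect are assumptions to match whatever taint you actually added to your worker nodes:

```yaml
promtail:
  tolerations:
    # Assumed taint key/value/effect; match the taint you added to your worker nodes.
    - key: scalar-labs.com/dedicated-node
      operator: Equal
      value: scalar-products
      effect: NoSchedule
```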
