From 3349ce5f59a4b98295ff16993595df7a97d4edf7 Mon Sep 17 00:00:00 2001 From: "Matt Linville (he/him)" Date: Thu, 12 Sep 2024 14:50:38 -0700 Subject: [PATCH] [DOC-9724] Cloud 2.0 D&O Docs (#18833) * [DOC-9724] Cloud 2.0 D&O Docs * Self-hosted edits for Cloud 2.0 D&O docs * Improvements to security enhancements section * Propagate changes to v23.1, v23.2, v24.1 * Fix links and anchors --- .../cockroachcloud/nodes-limitation.md | 2 +- .../orchestration/kubernetes-limitations.md | 2 +- .../orchestration/local-start-kubernetes.md | 16 +-- .../orchestration/kubernetes-limitations.md | 2 +- .../orchestration/local-start-kubernetes.md | 16 +-- .../orchestration/kubernetes-limitations.md | 2 +- .../orchestration/local-start-kubernetes.md | 16 +-- .../orchestration/kubernetes-limitations.md | 2 +- .../orchestration/local-start-kubernetes.md | 16 +-- .../cockroachcloud/cluster-overview-page.md | 4 +- .../cockroachdb-advanced-on-azure.md | 74 +++------- src/current/cockroachcloud/compliance.md | 26 ++-- .../connect-to-an-advanced-cluster.md | 95 +++++++++++-- .../cockroachcloud/connect-to-your-cluster.md | 46 +++--- .../cockroachcloud/create-a-basic-cluster.md | 30 ++-- .../create-an-advanced-cluster.md | 86 +++++------- .../cockroachcloud/create-your-cluster.md | 52 +++---- .../security-reference/security-overview.md | 2 +- .../security-reference/security-overview.md | 2 +- .../security-reference/security-overview.md | 2 +- .../deploy-cockroachdb-with-kubernetes.md | 5 +- src/current/v23.1/node-shutdown.md | 131 ++++++++++-------- ...estrate-a-local-cluster-with-kubernetes.md | 9 +- ...ate-a-multi-region-cluster-on-localhost.md | 6 +- .../deploy-cockroachdb-with-kubernetes.md | 5 +- src/current/v23.2/node-shutdown.md | 69 +++++---- ...estrate-a-local-cluster-with-kubernetes.md | 9 +- ...ate-a-multi-region-cluster-on-localhost.md | 6 +- .../deploy-cockroachdb-with-kubernetes.md | 4 +- src/current/v24.1/node-shutdown.md | 35 +++-- ...estrate-a-local-cluster-with-kubernetes.md | 
11 +- ...ate-a-multi-region-cluster-on-localhost.md | 6 +- .../deploy-cockroachdb-with-kubernetes.md | 4 +- src/current/v24.2/node-shutdown.md | 14 +- ...estrate-a-local-cluster-with-kubernetes.md | 11 +- ...ate-a-multi-region-cluster-on-localhost.md | 6 +- 36 files changed, 440 insertions(+), 384 deletions(-) diff --git a/src/current/_includes/cockroachcloud/nodes-limitation.md b/src/current/_includes/cockroachcloud/nodes-limitation.md index 4c35c8b79d6..b9a28931d8c 100644 --- a/src/current/_includes/cockroachcloud/nodes-limitation.md +++ b/src/current/_includes/cockroachcloud/nodes-limitation.md @@ -1 +1 @@ -CockroachDB {{ site.data.products.cloud }} does not support scaling a multi-node cluster down to a single node. +A multi-node cluster cannot be scaled down to a single node. diff --git a/src/current/_includes/v23.1/orchestration/kubernetes-limitations.md b/src/current/_includes/v23.1/orchestration/kubernetes-limitations.md index 68cace61be0..5e9784c28d1 100644 --- a/src/current/_includes/v23.1/orchestration/kubernetes-limitations.md +++ b/src/current/_includes/v23.1/orchestration/kubernetes-limitations.md @@ -34,4 +34,4 @@ When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB #### Storage -At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). +Kubernetes deployments use external persistent volumes that are often replicated by the provider. CockroachDB replicates data automatically, and this redundant layer of replication can impact performance. 
Using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) may improve performance. diff --git a/src/current/_includes/v23.1/orchestration/local-start-kubernetes.md b/src/current/_includes/v23.1/orchestration/local-start-kubernetes.md index d2eacbee277..7a62cd98fcc 100644 --- a/src/current/_includes/v23.1/orchestration/local-start-kubernetes.md +++ b/src/current/_includes/v23.1/orchestration/local-start-kubernetes.md @@ -4,21 +4,19 @@ Before getting started, it's helpful to review some Kubernetes-specific terminol Feature | Description --------|------------ -[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one of more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. -[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. -[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node. +[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | A tool commonly used to run a Kubernetes cluster on a local workstation. +[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more containers managed by Kubernetes. In this tutorial, all pods run on your local workstation. Each pod contains a single container that runs a single CockroachDB node. You'll start with 3 pods and grow to 4. +[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. +[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is storage mounted in a pod and available to its containers. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. +[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When a pod is created, it requests a persistent volume claim to claim durable storage for its node. ## Step 1. Start Kubernetes -1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation. - - {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailability field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}} +1. Follow the [Minikube documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install the latest version of `minikube`, a hypervisor, and the `kubectl` command-line tool. 1. Start a local Kubernetes cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube start + minikube start ~~~ diff --git a/src/current/_includes/v23.2/orchestration/kubernetes-limitations.md b/src/current/_includes/v23.2/orchestration/kubernetes-limitations.md index 68cace61be0..5e9784c28d1 100644 --- a/src/current/_includes/v23.2/orchestration/kubernetes-limitations.md +++ b/src/current/_includes/v23.2/orchestration/kubernetes-limitations.md @@ -34,4 +34,4 @@ When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB #### Storage -At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider.
Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). +Kubernetes deployments use external persistent volumes that are often replicated by the provider. CockroachDB replicates data automatically, and this redundant layer of replication can impact performance. Using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) may improve performance. diff --git a/src/current/_includes/v23.2/orchestration/local-start-kubernetes.md b/src/current/_includes/v23.2/orchestration/local-start-kubernetes.md index d2eacbee277..7a62cd98fcc 100644 --- a/src/current/_includes/v23.2/orchestration/local-start-kubernetes.md +++ b/src/current/_includes/v23.2/orchestration/local-start-kubernetes.md @@ -4,21 +4,19 @@ Before getting started, it's helpful to review some Kubernetes-specific terminol Feature | Description --------|------------ -[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one of more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. 
-[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. -[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node. +[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | A tool commonly used to run a Kubernetes cluster on a local workstation. +[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more containers managed by Kubernetes. In this tutorial, all pods run on your local workstation. Each pod contains a single container that runs a single CockroachDB node. You'll start with 3 pods and grow to 4. +[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. +[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is storage mounted in a pod and available to its containers. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. +[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When a pod is created, it requests a persistent volume claim to claim durable storage for its node. ## Step 1. Start Kubernetes -1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation. - - {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailability field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}} +1. Follow the [Minikube documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install the latest version of `minikube`, a hypervisor, and the `kubectl` command-line tool. 1. Start a local Kubernetes cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube start + minikube start ~~~ diff --git a/src/current/_includes/v24.1/orchestration/kubernetes-limitations.md b/src/current/_includes/v24.1/orchestration/kubernetes-limitations.md index 68cace61be0..5e9784c28d1 100644 --- a/src/current/_includes/v24.1/orchestration/kubernetes-limitations.md +++ b/src/current/_includes/v24.1/orchestration/kubernetes-limitations.md @@ -34,4 +34,4 @@ When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB #### Storage -At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider.
Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). +Kubernetes deployments use external persistent volumes that are often replicated by the provider. CockroachDB replicates data automatically, and this redundant layer of replication can impact performance. Using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) may improve performance. diff --git a/src/current/_includes/v24.1/orchestration/local-start-kubernetes.md b/src/current/_includes/v24.1/orchestration/local-start-kubernetes.md index d2eacbee277..7a62cd98fcc 100644 --- a/src/current/_includes/v24.1/orchestration/local-start-kubernetes.md +++ b/src/current/_includes/v24.1/orchestration/local-start-kubernetes.md @@ -4,21 +4,19 @@ Before getting started, it's helpful to review some Kubernetes-specific terminol Feature | Description --------|------------ -[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one of more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. 
-[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. -[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node. +[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | A tool commonly used to run a Kubernetes cluster on a local workstation. +[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more containers managed by Kubernetes. In this tutorial, all pods run on your local workstation. Each pod contains a single container that runs a single CockroachDB node. You'll start with 3 pods and grow to 4. +[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. +[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is storage mounted in a pod and available to its containers. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. +[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When a pod is created, it requests a persistent volume claim to claim durable storage for its node. ## Step 1. Start Kubernetes -1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation. - - {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailability field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}} +1. Follow the [Minikube documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install the latest version of `minikube`, a hypervisor, and the `kubectl` command-line tool. 1. Start a local Kubernetes cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube start + minikube start ~~~ diff --git a/src/current/_includes/v24.2/orchestration/kubernetes-limitations.md b/src/current/_includes/v24.2/orchestration/kubernetes-limitations.md index 68cace61be0..5e9784c28d1 100644 --- a/src/current/_includes/v24.2/orchestration/kubernetes-limitations.md +++ b/src/current/_includes/v24.2/orchestration/kubernetes-limitations.md @@ -34,4 +34,4 @@ When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB #### Storage -At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider.
Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). +Kubernetes deployments use external persistent volumes that are often replicated by the provider. CockroachDB replicates data automatically, and this redundant layer of replication can impact performance. Using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) may improve performance. diff --git a/src/current/_includes/v24.2/orchestration/local-start-kubernetes.md b/src/current/_includes/v24.2/orchestration/local-start-kubernetes.md index d2eacbee277..7a62cd98fcc 100644 --- a/src/current/_includes/v24.2/orchestration/local-start-kubernetes.md +++ b/src/current/_includes/v24.2/orchestration/local-start-kubernetes.md @@ -4,21 +4,19 @@ Before getting started, it's helpful to review some Kubernetes-specific terminol Feature | Description --------|------------ -[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one of more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. 
-[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. -[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node. +[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | A tool commonly used to run a Kubernetes cluster on a local workstation. +[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more containers managed by Kubernetes. In this tutorial, all pods run on your local workstation. Each pod contains a single container that runs a single CockroachDB node. You'll start with 3 pods and grow to 4. +[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. +[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is storage mounted in a pod and available to its containers. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. +[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When a pod is created, it requests a persistent volume claim to claim durable storage for its node. ## Step 1. Start Kubernetes -1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation. - - {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailability field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}} +1. Follow the [Minikube documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install the latest version of `minikube`, a hypervisor, and the `kubectl` command-line tool. 1. Start a local Kubernetes cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube start + minikube start ~~~ diff --git a/src/current/cockroachcloud/cluster-overview-page.md b/src/current/cockroachcloud/cluster-overview-page.md index 149fcc53618..8d337b56469 100644 --- a/src/current/cockroachcloud/cluster-overview-page.md +++ b/src/current/cockroachcloud/cluster-overview-page.md @@ -77,8 +77,8 @@ The **Cluster configuration** panel displays the settings you chose during [clus | Plan type | The [plan type]({% link cockroachcloud/create-an-advanced-cluster.md %}#step-1-start-the-cluster-creation-process) used to create the cluster. | | Regions | The cluster's [region]({% link cockroachcloud/create-an-advanced-cluster.md %}#step-3-configure-regions-and-nodes).
| | Nodes | The [number of nodes]({% link cockroachcloud/create-an-advanced-cluster.md %}#step-3-configure-regions-and-nodes) the cluster has and the status of each. | -| Compute | The cluster's [compute power per node]({% link cockroachcloud/create-an-advanced-cluster.md %}#step-5-configure-cluster-capacity). | -| Storage | The cluster's [storage per node]({% link cockroachcloud/create-an-advanced-cluster.md %}#step-5-configure-cluster-capacity). | +| Compute | The cluster's [compute power per node]({% link cockroachcloud/create-an-advanced-cluster.md %}#step-4-configure-cluster-capacity). | +| Storage | The cluster's [storage per node]({% link cockroachcloud/create-an-advanced-cluster.md %}#step-4-configure-cluster-capacity). | ## PCI ready (with Security add-on) diff --git a/src/current/cockroachcloud/cockroachdb-advanced-on-azure.md b/src/current/cockroachcloud/cockroachdb-advanced-on-azure.md index 9841e99c37f..499f95ac04a 100644 --- a/src/current/cockroachcloud/cockroachdb-advanced-on-azure.md +++ b/src/current/cockroachcloud/cockroachdb-advanced-on-azure.md @@ -1,73 +1,31 @@ --- -title: CockroachDB Dedicated on Azure -summary: Learn about limitations and FAQs about CockroachDB Dedicated on Microsoft Azure. +title: CockroachDB Advanced on Azure +summary: Learn about limitations and FAQs about CockroachDB Advanced on Microsoft Azure. toc: true toc_not_nested: true docs_area: deploy --- -This page provides information about CockroachDB {{ site.data.products.dedicated }} clusters on Microsoft Azure, including frequently asked questions and limitations. To create a cluster, refer to [Create a CockroachDB {{ site.data.products.dedicated }} Cluster]({% link cockroachcloud/create-your-cluster.md %}). +This page provides information about CockroachDB {{ site.data.products.advanced }} clusters on Microsoft Azure, including frequently asked questions and limitations. 
To create a cluster, refer to [Create a CockroachDB {{ site.data.products.advanced }} Cluster]({% link cockroachcloud/create-an-advanced-cluster.md %}). -## Limitations +## Limitations -CockroachDB {{ site.data.products.dedicated }} clusters on Azure have the following temporary limitations. To express interest or request more information about a given limitation, contact your Cockroach Labs account team. For more details, refer to the [FAQs](#faqs). -A cluster must have at minimum three nodes. A multi-region cluster must have at minimum three nodes per region. Single-node clusters are not supported on Azure. -[PCI-Ready]({% link cockroachcloud/pci-dss.md %}) features are not yet available on Azure. To express interest, contact your Cockroach Labs account team. +CockroachDB {{ site.data.products.advanced }} clusters on Azure have the following temporary limitations. To express interest or request more information about a given limitation, contact your Cockroach Labs account team.
+ - [Private Clusters]({% link cockroachcloud/private-clusters.md %}) + - [Customer Managed Encryption Keys (CMEK)]({% link cockroachcloud/cmek.md %}) + - [Egress Perimeter Controls]({% link cockroachcloud/egress-perimeter-controls.md %}) -## FAQs + You can configure IP allowlisting to limit the IP addresses or CIDR ranges that can access a CockroachDB {{ site.data.products.advanced }} cluster on Azure, and you can use [Azure Private Link](https://learn.microsoft.com/azure/private-link/private-link-overview) to connect your applications in Azure to your cluster and avoid exposing your cluster or applications to the public internet. Refer to [Connect to your cluster]({% link cockroachcloud/connect-to-your-cluster.md %}#azure-private-link). -The following sections provide more details about CockroachDB {{ site.data.products.dedicated }} on Azure. +## Change data capture -### Can CockroachDB {{ site.data.products.serverless }} clusters be deployed on Azure? +CockroachDB {{ site.data.products.advanced }} supports [changefeeds](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/changefeed-messages), which allow your cluster to send data events in real-time to a [downstream sink](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/changefeed-sinks). [Azure Event Hubs](https://learn.microsoft.com/azure/event-hubs/azure-event-hubs-kafka-overview) provides an Azure-native service that can be used with a Kafka endpoint as a sink. -CockroachDB {{ site.data.products.serverless }} is not currently available on Azure. +## Disaster recovery -### Can we use {{ site.data.products.db }} credits to pay for clusters on Azure? +[Managed-service backups]({% link cockroachcloud/use-managed-service-backups.md %}?filters=advanced) automatically back up clusters in CockroachDB {{ site.data.products.cloud }}.
-Yes, a CockroachDB {{ site.data.products.cloud }} organization can pay for the usage of CockroachDB {{ site.data.products.dedicated }} clusters on Azure with {{ site.data.products.db }} credits. To add additional credits to your CockroachDB {{ site.data.products.cloud }} organization, contact your Cockroach Labs account team. - -### Can we migrate from PostgreSQL to CockroachDB {{ site.data.products.dedicated }} on Azure? - -CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and the majority of PostgreSQL syntax. Refer to [Supported SQL Feature Support]({% link {{ site.current_cloud_version }}/sql-feature-support.md %}). The same CockroachDB binaries are used across CockroachDB {{ site.data.products.cloud }} deployment environments, and all SQL features behave the same on Azure as on GCP or AWS. - -### What kind of compute and storage resources are used? - -{{ site.data.products.dedicated }} clusters on Azure use [Dsv4-series VMs](https://learn.microsoft.com/azure/virtual-machines/dv4-dsv4-series) and [Premium SSDs](https://learn.microsoft.com/azure/virtual-machines/disks-types#premium-ssds). This configuration was selected for its optimum price-performance ratio after thorough performance testing across VM families and storage types. - -### What backup and restore options are available for clusters on Azure? - -[Managed-service backups]({% link cockroachcloud/use-managed-service-backups.md %}?filters=dedicated) automatically back up clusters on Azure, and customers can [take and restore from manual backups to Azure storage]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}) ([Blob Storage](https://azure.microsoft.com/products/storage/blobs) or [ADLS Gen 2](https://learn.microsoft.com/azure/storage/blobs/data-lake-storage-introduction)). 
Refer to the blog post [CockroachDB locality-aware Backups for Azure Blob](https://www.cockroachlabs.com/blog/locality-aware-backups-azure-blob/) for an example. - -You can [take and restore from encrypted backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}) on Azure storage by using an RSA key stored in [Azure Key Vault](https://learn.microsoft.com/azure/key-vault/keys/about-keys). - -### Are changefeeds available? - -Yes, customers can create and configure [changefeeds]({% link {{ site.current_cloud_version }}/changefeed-messages.md %}) to send data events in real-time from a CockroachDB {{ site.data.products.dedicated }} cluster to a [downstream sink]({% link {{ site.current_cloud_version }}/changefeed-sinks.md %}) such as Kafka, Azure storage, or Webhook. [Azure Event Hubs](https://learn.microsoft.com/azure/event-hubs/azure-event-hubs-kafka-overview) provides an Azure-native service that can be used with a Kafka endpoint as a sink. - -### What secure and centralized authentication methods are available for {{ site.data.products.dedicated }} clusters on Azure? - -Human users can connect using [Cluster SSO]({% link cockroachcloud/cloud-sso-sql.md %}), [client certificates]({% link {{ site.current_cloud_version }}/authentication.md %}#using-digital-certificates-with-cockroachdb), or the [`ccloud` command]({% link cockroachcloud/ccloud-get-started.md %}) or SQL clients. - -Application users can connect using [JWT tokens]({% link {{ site.current_cloud_version }}/sso-sql.md %}) or [client certificates]({% link {{ site.current_cloud_version }}/authentication.md %}#using-digital-certificates-with-cockroachdb). - -### Can we use private connectivity methods, such as Private Link, to securely connect to a cluster on Azure? 
- -You can configure IP allowlisting to limit the IP addresses or CIDR ranges that can access a CockroachDB {{ site.data.products.dedicated }} cluster on Azure, and you can use [Azure Private Link](https://learn.microsoft.com/azure/private-link/private-link-overview) to connect your applications in Azure to your cluster and avoid exposing your cluster or applications to the public internet. Refer to [Connect to your cluster]({% link cockroachcloud/connect-to-your-cluster.md %}#azure-private-link). - -### How are clusters on Azure isolated from each other? Do they follow a similar approach as on AWS and GCP? - -CockroachDB {{ site.data.products.cloud }} follows a similar tenant isolation approach on Azure as on GCP and AWS. Each {{ site.data.products.dedicated }} cluster is created on an [AKS cluster](https://azure.microsoft.com/products/kubernetes-service) in a unique [VNet](https://learn.microsoft.com/azure/virtual-network/virtual-networks-overview). Implementation details are subject to change. - -### How is data encrypted at rest in a cluster on Azure? - -Customer data at rest on cluster disks is encrypted using [server-side encryption of Azure disk storage](https://learn.microsoft.com/azure/virtual-machines/disk-encryption). [Customer-Managed Encryption Keys (CMEK)]({% link cockroachcloud/cmek.md %}) are not yet available. To express interest, contact your Cockroach Labs account team. - -All client connections to a CockroachDB {{ site.data.products.dedicated }} cluster, as well as connections between nodes, are encrypted using TLS. - -### Do CockroachDB {{ site.data.products.dedicated }} clusters on Azure comply with SOC 2? - -CockroachDB Dedicated on Azure meets or exceeds the requirements of SOC 2 Type 2. Refer to [Regulatory Compliance in CockroachDB {{ site.data.products.dedicated }}]({% link cockroachcloud/compliance.md %}). 
+You can [take and restore from manual backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}) to Azure ([Blob Storage](https://azure.microsoft.com/products/storage/blobs) or [ADLS Gen 2](https://learn.microsoft.com/azure/storage/blobs/data-lake-storage-introduction)). Refer to the blog post [CockroachDB locality-aware Backups for Azure Blob](https://www.cockroachlabs.com/blog/locality-aware-backups-azure-blob/) for an example. To encrypt manual backups using an RSA key, refer to the [Azure Key Vault](https://learn.microsoft.com/azure/key-vault/keys/about-keys) documentation. diff --git a/src/current/cockroachcloud/compliance.md b/src/current/cockroachcloud/compliance.md index 2a61ebb050e..9e5afef16cc 100644 --- a/src/current/cockroachcloud/compliance.md +++ b/src/current/cockroachcloud/compliance.md @@ -1,20 +1,30 @@ --- -title: Regulatory Compliance in CockroachDB Dedicated -summary: Learn about the regulatory and compliance standards met by CockroachDB Dedicated. +title: Regulatory Compliance in CockroachDB Advanced +summary: Learn about the regulatory and compliance standards met by CockroachDB Advanced. toc: true docs_area: manage.security --- -When configured correctly, CockroachDB {{ site.data.products.dedicated }} meets the requirements of the following regulatory and compliance standards: +When configured correctly, CockroachDB {{ site.data.products.advanced }} meets the requirements of the following regulatory and compliance standards: -- **System and Organization Controls (SOC) 2 Type 2**: CockroachDB {{ site.data.products.dedicated }} standard and advanced meet or exceed the requirements of SOC 2 Type 2, which is established and administered by the American Institute of Certified Public Accountants (AICPA).
This certification means that the design and implementation of the controls and procedures that protect CockroachDB {{ site.data.products.dedicated }} meet the relevant trust objectives both at a point in time and over a period of time. +## SOC 2 Type 2 + +CockroachDB {{ site.data.products.standard }} and CockroachDB {{ site.data.products.advanced }} meet or exceed the requirements of SOC 2 Type 2, which is established and administered by the American Institute of Certified Public Accountants (AICPA). This certification means that the design and implementation of the controls and procedures that protect CockroachDB {{ site.data.products.advanced }} meet the relevant trust objectives both at a point in time and over a period of time. To learn more, refer to [SOC 2 Type 2 certification](https://www.cockroachlabs.com/blog/soc-2-compliance-2/) in the CockroachDB blog or contact your Cockroach Labs account representative. -- **Payment Card Industry Data Security Standard (PCI DSS)**: CockroachDB {{ site.data.products.dedicated }} advanced has been certified by a PCI Qualified Security Assessor (QSA) as a PCI DSS Level 1 Service Provider. When configured appropriately, CockroachDB {{ site.data.products.dedicated }} advanced meets the requirements of PCI DSS 3.2.1. PCI DSS is mandated by credit card issuers but administered by the [Payment Card Industry Security Standards Council](https://www.pcisecuritystandards.org/). Many organizations that do not store cardholder data still rely on compliance with PCI DSS to help protect other sensitive or confidential data or metadata. +## PCI DSS + +CockroachDB {{ site.data.products.advanced }} has been certified by a PCI Qualified Security Assessor (QSA) as a PCI DSS Level 1 Service Provider. When configured appropriately, CockroachDB {{ site.data.products.advanced }} meets the requirements of PCI DSS 3.2.1.
PCI DSS is mandated by credit card issuers but administered by the [Payment Card Industry Security Standards Council](https://www.pcisecuritystandards.org/). Many organizations that do not store cardholder data still rely on compliance with PCI DSS to help protect other sensitive or confidential data or metadata. + +To learn more, refer to [PCI DSS Compliance in CockroachDB {{ site.data.products.advanced }}]({% link cockroachcloud/pci-dss.md %}). + +## HIPAA + +The Health Insurance Portability and Accountability Act of 1996, commonly referred to as _HIPAA_, defines standards for the storage and handling of personally-identifiable information (PII) related to patient healthcare and health insurance (also referred to as Protected Health Information, or PHI). - To learn more, refer to [PCI DSS Compliance in CockroachDB {{ site.data.products.dedicated }} advanced]({% link cockroachcloud/pci-dss.md %}). +When configured appropriately for [PCI DSS Compliance]({% link cockroachcloud/pci-dss.md %}), CockroachDB {{ site.data.products.advanced }} on AWS and GCP also meets the requirements of HIPAA. CockroachDB {{ site.data.products.advanced }} on Azure is not yet certified for compliance with HIPAA. -- **Health Insurance Portability and Accountability Act (HIPAA)**: The Health Insurance Portability and Accountability Act of 1996, commonly referred to as _HIPAA_, defines standards for the storage and handling of personally-identifiable information (PII) related to patient healthcare and health insurance. When configured appropriately for [PCI DSS Compliance]({% link cockroachcloud/pci-dss.md %}), CockroachDB {{ site.data.products.dedicated }} advanced also meets the requirements of HIPAA. +## ISO 27001 and ISO 27017 -- **ISO 27001** and **ISO 27017**: ISO 27001 and ISO 27017 define international standards for managing information security. ISO 27001 is a general standard, and ISO 27017 is a standard specific to cloud service providers and environments.
These standards are governed jointly by the [International Organization for Standardization (ISO)](https://www.iso.org/home.html) and the [International Electrotechnical Commission (IEC)](https://www.iec.ch/homepage). CockroachDB {{ site.data.products.dedicated }} meets the requirements of ISO 27001 and ISO 27017. +ISO 27001 and ISO 27017 define international standards for managing information security. ISO 27001 is a general standard, and ISO 27017 is a standard specific to cloud service providers and environments. These standards are governed jointly by the [International Organization for Standardization (ISO)](https://www.iso.org/home.html) and the [International Electrotechnical Commission (IEC)](https://www.iec.ch/homepage). CockroachDB {{ site.data.products.advanced }} meets the requirements of ISO 27001 and ISO 27017. diff --git a/src/current/cockroachcloud/connect-to-an-advanced-cluster.md b/src/current/cockroachcloud/connect-to-an-advanced-cluster.md index ab61083402e..c20e970ea2a 100644 --- a/src/current/cockroachcloud/connect-to-an-advanced-cluster.md +++ b/src/current/cockroachcloud/connect-to-an-advanced-cluster.md @@ -38,21 +38,53 @@ Removing or adding an authorized network on your CockroachDB {{ site.data.produc ### Establish private connectivity -GCP VPC Peering and AWS PrivateLink allow customers to establish SQL access to their clusters entirely through cloud provider private infrastructure, without exposure to the public internet, affording enhanced security and performance. +Private connectivity allows you to establish SQL access to a CockroachDB {{ site.data.products.advanced }} cluster entirely through cloud provider private infrastructure, without exposing the cluster to the public internet, affording enhanced security and performance. -VPC peering is available only for GCP clusters, and AWS PrivateLink is available for AWS clusters.
+- Clusters deployed on GCP can connect privately using [GCP Private Service Connect (PSC)](#gcp-private-service-connect). PSC allows you to connect your cluster directly to a VPC within your Google Cloud project. VPC Peering is not supported. +- Clusters deployed on AWS can connect privately using [AWS PrivateLink](#aws-privatelink), which allows you to connect your cluster to a VPC within your AWS account. +- Clusters deployed on Azure can connect privately using [Azure Private Link](#azure-private-link), which allows you to connect your cluster to a virtual network within your Azure tenant. -To configure VPC Peering or PrivateLink, you create the private connection in your cloud provider, then configure your cluster to allow connections from your VPC or private endpoint. For more information, refer to [Network Authorization for CockroachDB {{ site.data.products.advanced }} clusters: GCP VPC Peering]({% link cockroachcloud/network-authorization.md %}#vpc-peering) and [Network Authorization for CockroachDB {{ site.data.products.advanced }} clusters: AWS PrivateLink]({% link cockroachcloud/network-authorization.md %}#aws-privatelink). +For more information, refer to [Network authorization]({% link cockroachcloud/network-authorization.md %}). -AWS PrivateLink can be configured only after the cluster is created. For detailed instructions, refer to [Managing AWS PrivateLink for a cluster]({% link cockroachcloud/aws-privatelink.md %}). To configure VPC Peering, continue to the [VPC Peering](#vpc-peering) section below. - -Azure Private Link is not yet available for [CockroachDB {{ site.data.products.advanced }} on Azure]({% link cockroachcloud/cockroachdb-advanced-on-azure.md %}). +{{site.data.alerts.callout_success}} +Private connectivity can be configured only after a cluster is created. 
+{{site.data.alerts.end}} {{site.data.alerts.callout_info}} {% include cockroachcloud/cdc/kafka-vpc-limitation.md %} {{site.data.alerts.end}} -#### VPC Peering +#### GCP Private Service Connect + +{{site.data.alerts.callout_info}} +{% include_cached feature-phases/preview.md %} +{{site.data.alerts.end}} + +1. Navigate to your cluster's **Networking > Private endpoint** tab. +1. Click **Add a private endpoint**. Copy the value provided for **Target service**. Do not close this browser window. +1. In a new browser window, log in to the Google Cloud Console, go to the **Private Service Connect** section, and create a new endpoint in the same VPC as your application. For details, refer to [Create an endpoint](https://cloud.google.com/vpc/docs/configure-private-service-connect-services#create-endpoint) in the Google Cloud documentation. + - Set **Target** to **Published service**. + - Set **Target service** to the value you copied from the CockroachDB {{ site.data.products.cloud }} Console. If the endpoint's configured target service does not match, validation will fail. + - Provide a value for **Endpoint name**. This is not used by CockroachDB {{ site.data.products.cloud }}. + - If it is not enabled, enable the Service Directory API, click **Enable global access**, and create a namespace in each region where your cluster is deployed. + - Click **Add endpoint**. + - After the endpoint is created, copy the connection ID. +1. Return to the CockroachDB {{ site.data.products.cloud }} Console browser tab and click **Validate**. +1. Enter the endpoint's ID, then click **Validate**. CockroachDB {{ site.data.products.cloud }} attempts to connect to the endpoint's VPC and verifies that the target service matches the cluster. If validation fails, verify the endpoint's configuration, then try again. After validation succeeds, click **Complete** to finish creating the connection. +1. On the **Networking > Private endpoint** tab, verify that the connection status is **Available**.
+ +{{site.data.alerts.callout_success}} +After validation succeeds for an endpoint, additional endpoints in the same VPC are automatically accepted if they are configured with the cluster's target service ID. Additional VPCs must be added separately. +{{site.data.alerts.end}} + +If you remove the endpoint from GCP or change its target service, the endpoint will be removed from the cluster automatically. + +After the connection is established, you can use it to [connect to your cluster](#connect-to-your-cluster). + + +#### GCP VPC Peering + +For GKE, we recommend deploying your application to a VPC-native cluster that uses [alias IP addresses](https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips). If you are connecting from a [routes-based GKE cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/routes-based-cluster) instead, you must [export custom routes](https://cloud.google.com/vpc/docs/vpc-peering#importing-exporting-routes). CockroachDB {{ site.data.products.cloud }} will import your custom routes by default. 1. Navigate to your cluster's **Networking > VPC Peering** tab. 1. Click **Set up a VPC peering connection**. @@ -63,10 +95,55 @@ Azure Private Link is not yet available for [CockroachDB {{ site.data.products.a 1. Run the command displayed on the **Accept VPC peering connection request** window using [Google Cloud Shell](https://cloud.google.com/shell) or using the [gcloud command-line tool](https://cloud.google.com/sdk/gcloud). 1. On the **Networking** page, verify the connection status is **Available**. +After the connection is established, you can use it to [connect to your cluster](#connect-to-your-cluster). + +{{site.data.alerts.callout_info}} +Self-service VPC peering setup is not supported for CockroachDB {{ site.data.products.advanced }} clusters deployed before March 5, 2020.
If your cluster was deployed before March 5, 2020, you will have to [create a new cluster]({% link cockroachcloud/create-your-cluster.md %}) with VPC peering enabled, then [export your data]({% link cockroachcloud/use-managed-service-backups.md %}) from the old cluster to the new cluster. If your cluster was deployed on or after March 5, 2020, it will be locked into CockroachDB {{ site.data.products.advanced }}'s default IP range (`172.28.0.0/14`) unless you explicitly configured a different IP range during cluster creation. +{{site.data.alerts.end}} + +#### AWS PrivateLink + +To establish an AWS PrivateLink connection, refer to [Managing AWS PrivateLink for a cluster]({% link cockroachcloud/aws-privatelink.md %}). After the connection is established, you can use it to [connect to your cluster](#connect-to-your-cluster). + +#### Azure Private Link + +{{site.data.alerts.callout_info}} +{% include_cached feature-phases/preview.md %} +{{site.data.alerts.end}} + +1. Navigate to your cluster's **Networking > Private endpoint** tab. +1. Click **Add a private endpoint**. Copy the value provided for **Alias**. Do not close this browser window. +1. In a new browser window, log in to the Azure portal and create a new private endpoint for your cluster. + - Set the connection method to “by resource ID or alias”. + - Set the resource ID to the **Alias** you previously copied. For details, refer to [Create a private endpoint](https://learn.microsoft.com/azure/private-link/create-private-endpoint-portal?tabs=dynamic-ip) in the Azure documentation. + + After the private endpoint is created, view it, then click **Properties** and copy its Resource ID. + + {{site.data.alerts.callout_info}} + Copy the resource ID for the private endpoint you just created, not for the Private Link resource itself. + {{site.data.alerts.end}} + + Do not close this browser window. +1. Return to the CockroachDB {{ site.data.products.cloud }} Console browser tab and click **Next**. +1.
Paste the resource ID for the Azure private endpoint, then click **Validate**. If validation fails, verify the resource ID and try again. If you encounter the error `This resource is invalid`, be sure that you are using the resource ID for the Azure private endpoint, rather than the resource ID for Azure Private Link itself. + + When validation succeeds, click **Next** to configure private DNS. Make a note of the Internal DNS Name. Do not close this browser window. +1. Return to the Azure portal. Go to the **Private DNS Zone** page and create private DNS records for your cluster in the region where you will connect privately. + - Create a private DNS zone named with the Internal DNS Name you previously copied. Refer to [Quickstart: Create an Azure private DNS zone using the Azure portal](https://learn.microsoft.com/azure/dns/private-dns-getstarted-portal). + - In the new DNS zone, create an `@` record with the Internal DNS Name you previously copied. + - Click **Complete** to finish creating the DNS records. +1. Associate the new DNS zone with the private endpoint's virtual network. View the private endpoint's configuration, click **Virtual network links**, then click **Add**. + - Name the link, then select the resource group and the DNS zone you just created. + - Enable auto-registration. + - Click **OK**. + + For details, refer to [Link the virtual network](https://learn.microsoft.com/azure/dns/private-dns-getstarted-portal#link-the-virtual-network). +1. Return to the CockroachDB {{ site.data.products.cloud }} Console browser tab and click **Complete**. +1. On the **Networking** page, verify the connection status is **Available**. ## Connect to your cluster -1. In the top right corner of the CockroachDB {{ site.data.products.cloud }} Console, click the **Connect** button. +1. In the top right corner of the CockroachDB {{ site.data.products.cloud }} Console, click **Connect**. The **Setup** page of the **Connect to cluster** dialog displays.
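The **Connect** dialog ultimately produces a standard PostgreSQL-style connection string, since CockroachDB speaks the PostgreSQL wire protocol. A sketch of its general shape, with entirely hypothetical user, password, host, and database values — use the values shown in the dialog for your own cluster:

```python
from urllib.parse import quote

def connection_url(user: str, password: str, host: str, database: str,
                   port: int = 26257) -> str:
    """Assemble a CockroachDB connection URL. CockroachDB uses the
    postgresql:// scheme; sslmode=verify-full requires TLS and verifies
    the server certificate. Reserved characters in the credentials are
    percent-encoded so the URL parses unambiguously."""
    return (f"postgresql://{quote(user)}:{quote(password)}"
            f"@{host}:{port}/{database}?sslmode=verify-full")

# Hypothetical values for illustration only.
url = connection_url("maxroach", "p@ssw0rd",
                     "internal-example.cockroachlabs.cloud", "defaultdb")
```

When connecting over a private endpoint, the host portion is the private DNS name configured above rather than a public address.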
@@ -78,7 +155,7 @@ Azure Private Link is not yet available for [CockroachDB {{ site.data.products.a {{site.data.alerts.end}} 1. Select the **Database**. If you have only one database, it is automatically selected. -1. For a multiregion cluster, select the **Region** to connect to. If you have only one region, it is automatically selected. +1. For a multi-region cluster, select the **Region** to connect to. If you have only one region, it is automatically selected. 1. Click **Next**. The **Connect** page of the **Connection info** dialog displays. diff --git a/src/current/cockroachcloud/connect-to-your-cluster.md b/src/current/cockroachcloud/connect-to-your-cluster.md index d6202fe3e57..bc264dc99df 100644 --- a/src/current/cockroachcloud/connect-to-your-cluster.md +++ b/src/current/cockroachcloud/connect-to-your-cluster.md @@ -17,11 +17,11 @@ This page shows you how to connect to your CockroachDB {{ site.data.products.sta ## Authorize your network -By default, CockroachDB {{ site.data.products.standard }} clusters are locked down to all network access. You must authorized certain network connections in order to allow SQL clients to connect to your clusters. {{ site.data.products.standard }} clusters can accept connections via two types of authorized network: +By default, CockroachDB {{ site.data.products.standard }} clusters are locked down to all network access. You must authorize certain network connections in order to allow SQL clients to connect to your clusters. {{ site.data.products.standard }} clusters can accept connections via two types of authorized networks: - Allowed IP address ranges on the internet.
- Cloud-provider-specific peer networking options: - - Google Cloud Platform (GCP) VPC Peering or Private Service Connect (Preview) + - Google Cloud Platform (GCP) Private Service Connect (Preview) - Amazon Web Services (AWS) Privatelink {{site.data.alerts.callout_info}} @@ -37,18 +37,36 @@ Private connectivity allows you to establish SQL access to a CockroachDB {{ site - Clusters deployed on GCP can connect privately using [GCP Private Service Connect (PSC)](#gcp-private-service-connect) or [GCP VPC peering](#gcp-vpc-peering). PSC allows you to connect your cluster directly to a VPC within your Google Cloud project, while VPC Peering allows you to peer your cluster's VPC in CockroachDB {{ site.data.products.cloud }} to a VPC within your Google Cloud project. - Clusters deployed on AWS can connect privately using [AWS PrivateLink](#aws-privatelink), which allows you to connect your cluster to a VPC within your AWS account. -- Clusters deployed on Azure can connect privately using [Azure Private Link](#azure-private-link), which allows you to connect your cluster to a virtual network within your Azure tenant. + +CockroachDB {{ site.data.products.standard }} is not yet available on Azure. For more information, refer to [Network authorization]({% link cockroachcloud/network-authorization.md %}). {{site.data.alerts.callout_success}} -GCP Private Service Connect, AWS PrivateLink, and Azure Private Link can be configured only after a cluster is created. +Private connectivity cannot be configured during cluster creation. {{site.data.alerts.end}} {{site.data.alerts.callout_info}} {% include cockroachcloud/cdc/kafka-vpc-limitation.md %} {{site.data.alerts.end}} + +#### GCP VPC Peering + +For GKE, we recommend deploying your application to a VPC-native cluster that uses [alias IP addresses](https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips). 
If you are connecting from a [routes-based GKE cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/routes-based-cluster) instead, you must [export custom routes](https://cloud.google.com/vpc/docs/vpc-peering#importing-exporting-routes). CockroachDB {{ site.data.products.cloud }} will import your custom routes by default. + +
+ + + +
+ +After the connection is established, you can use it to [connect to your cluster](#connect-to-your-cluster). + +{{site.data.alerts.callout_info}} +Self-service VPC peering setup is not supported for CockroachDB {{ site.data.products.standard }} clusters deployed before March 5, 2020. If your cluster was deployed before March 5, 2020, you will have to [create a new cluster]({% link cockroachcloud/create-your-cluster.md %}) with VPC peering enabled, then [export your data]({% link cockroachcloud/use-managed-service-backups.md %}) from the old cluster to the new cluster. If your cluster was deployed on or after March 5, 2020, it will be locked into CockroachDB's default IP range (`172.28.0.0/14`) unless you explicitly configured a different IP range during cluster creation. +{{site.data.alerts.end}} + #### GCP Private Service Connect {{site.data.alerts.callout_info}} @@ -76,23 +94,6 @@ If you remove the endpoint from GCP or change its target service, the endpoint w After the connection is established, you can use it to [connect to your cluster](#connect-to-your-cluster).
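Before setting up VPC peering, it is worth confirming that your application VPC's CIDR block does not overlap the default IP range noted in the callout above. A small check using Python's standard `ipaddress` module (the candidate peer ranges are hypothetical examples):

```python
import ipaddress

# Default cluster IP range, per the callout above.
CLUSTER_RANGE = ipaddress.ip_network("172.28.0.0/14")

def overlaps_cluster_range(peer_cidr: str) -> bool:
    """Return True if the candidate peer VPC CIDR overlaps the cluster's
    default IP range, which would prevent a clean peering setup."""
    return ipaddress.ip_network(peer_cidr).overlaps(CLUSTER_RANGE)

print(overlaps_cluster_range("10.0.0.0/16"))    # a non-overlapping range
print(overlaps_cluster_range("172.29.0.0/16"))  # falls inside 172.28.0.0/14
```

If your cluster was created with a custom IP range, substitute that range for the default shown here.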
- - - -
- -After the connection is established, you can use it to [connect to your cluster](#connect-to-your-cluster). - -{{site.data.alerts.callout_info}} -Self-service VPC peering setup is not supported for CockroachDB {{ site.data.products.dedicated }} clusters deployed before March 5, 2020. If your cluster was deployed before March 5, 2020, you will have to [create a new cluster]({% link cockroachcloud/create-your-cluster.md %}) with VPC peering enabled, then [export your data]({% link cockroachcloud/use-managed-service-backups.md %}) from the old cluster to the new cluster. If your cluster was deployed on or after March 5, 2020, it will be locked into CockroachDB {{ site.data.products.dedicated }}'s default IP range (`172.28.0.0/14`) unless you explicitly configured a different IP range during cluster creation. -{{site.data.alerts.end}} - #### AWS PrivateLink To establish an AWS PrivateLink connection, refer to [Managing AWS PrivateLink for a cluster]({% link cockroachcloud/aws-privatelink.md %}). After the connection is established, you can use it to [connect to your cluster](#connect-to-your-cluster). @@ -141,6 +142,9 @@ To establish an AWS PrivateLink connection, refer to [Managing AWS PrivateLink f If you forget your SQL user's password, an [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) or a Cluster Admin on the cluster can change the password on the **SQL Users** page. {{site.data.alerts.end}} +1. In the top right corner of the CockroachDB {{ site.data.products.cloud }} Console, click **Connect**. + + The **Setup** page of the **Connect to cluster** dialog displays. 1. If you have set up a private connection, select it to connect privately. Otherwise, click **IP Allowlist**. 1. Select the **SQL User**. If you have only one SQL user, it is automatically selected. 
diff --git a/src/current/cockroachcloud/create-a-basic-cluster.md b/src/current/cockroachcloud/create-a-basic-cluster.md index 293f2c072e6..a1ed19e7197 100644 --- a/src/current/cockroachcloud/create-a-basic-cluster.md +++ b/src/current/cockroachcloud/create-a-basic-cluster.md @@ -10,7 +10,7 @@ cloud: true This page guides you through the process of creating a cluster using CockroachDB {{ site.data.products.basic }}. Note that only [CockroachDB {{ site.data.products.cloud }} Org Administrators]({% link cockroachcloud/authorization.md %}#org-administrator) or users with Cluster Creator / Cluster Admin roles assigned at organization scope can create clusters. If you are a Developer and need to create a cluster, contact your CockroachDB {{ site.data.products.cloud }} Administrator. -New CockroachDB {{ site.data.products.serverless }} clusters always use the latest stable version of CockroachDB, and are automatically [upgraded]({% link cockroachcloud/upgrade-to-{{ site.current_cloud_version }}.md %}) to new patch versions, as well as new major versions, to maintain uninterrupted support and SLA guarantees. For more details, refer to [CockroachDB Cloud Upgrade Policy]({% link cockroachcloud/upgrade-policy.md %}). +New CockroachDB {{ site.data.products.basic }} clusters always use the latest stable version of CockroachDB, and are automatically [upgraded]({% link cockroachcloud/upgrade-to-{{ site.current_cloud_version }}.md %}) to new patch versions, as well as new major versions, to maintain uninterrupted support and SLA guarantees. For more details, refer to [CockroachDB Cloud Upgrade Policy]({% link cockroachcloud/upgrade-policy.md %}). ## Before you begin @@ -20,19 +20,17 @@ If you haven't already, sign up for a CockroachDB {{ site.data.products.cloud }} account, then [log in](https://cockroachlabs.cloud/). {% include cockroachcloud/prefer-sso.md %} -1. If there are multiple organizations in your account, select the correct organization in the top right corner. -1. 
On the **Overview** page, click **Create Cluster**. +1. If there are multiple [organizations](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/architecture/glossary#organization) in your account, verify that the correct one is selected in the top right corner. +1. On the **Clusters** page, click **Create Cluster** or, if you also have permission to create folders, click **Create > Create Cluster**. 1. On the **Select a plan** page, select **Basic**. ## Step 2. Select the cloud provider -On the **Cloud & Regions** page, select a cloud provider (GCP or AWS) in the **Cloud provider** section. Creating a {{ site.data.products.basic }} cluster on Azure is not supported. +On the **Cloud & Regions** page, select a cloud provider (GCP or AWS) in the **Cloud provider** section. CockroachDB {{ site.data.products.basic }} is not supported on Azure. -{{site.data.alerts.callout_info}} -You do not need an account with the cloud provider you choose in order to create a cluster on that cloud provider. The cluster is created on infrastructure managed by Cockroach Labs. If you have existing cloud services on either GCP or AWS that you intend to use with your CockroachDB {{ site.data.products.basic }} cluster, you should select that cloud provider and the region closest to your existing cloud services to maximize performance. -{{site.data.alerts.end}} +You do not need an account in the deployment environment you choose. The cluster is created on infrastructure managed by Cockroach Labs. For optimal performance, create your cluster on the cloud provider and in the regions that best align with your existing cloud services. ## Step 3. Select the regions @@ -43,7 +41,7 @@ For optimal performance, select the cloud provider and region nearest to where y To create a multi-region cluster, click **Add regions** and select additional regions. A cluster can have at most six regions. {{site.data.alerts.callout_info}} -You cannot currently remove regions once they have been added.
+You cannot remove a region. {{site.data.alerts.end}} After creating a multi-region cluster deployed on AWS, you can optionally [set up AWS PrivateLink (Limited Access)]({% link cockroachcloud/network-authorization.md %}#aws-privatelink) so that incoming connections to your cluster from applications or services running in your AWS account flow over private AWS network infrastructure rather than the public internet. @@ -56,7 +54,7 @@ Click **Next: Capacity**. Your cluster's capacity dictates its resource limits, which are the maximum amount of storage and RUs you can use in a month. If you reach your storage limit, your cluster will be throttled and you may only be able to delete data. If you reach your RU limit, your cluster will be disabled until the end of the billing cycle unless you raise the limit. -All CockroachDB {{ site.data.products.cloud }} organizations get 50M RUs and 10 GiB of storage for free each month. Free resources can be spent across all CockroachDB {{ site.data.products.basic }} clusters in an organization. You can set higher resource limits to maintain a high level of performance with larger workloads. You will only be charged for what you use. +Each CockroachDB {{ site.data.products.cloud }} organization gets 50M RUs and 10 GiB of storage for free each month. Free resources can be spent across all CockroachDB {{ site.data.products.basic }} clusters in an organization. You can set higher resource limits to maintain a high level of performance with larger workloads. You will only be charged for what you use. {% include cockroachcloud/basic-usage.md %} For more information, see [Planning your cluster]({% link cockroachcloud/plan-your-cluster.md %}). @@ -67,10 +65,10 @@ All CockroachDB {{ site.data.products.cloud }} organizations get 50M RUs and 10
-1. On the **Capacity** page, select the **Start for free** option. +1. On the **Capacity** page, select **Start for free**. {{site.data.alerts.callout_info}} - This will only be available if you haven't already created a free CockroachDB {{ site.data.products.basic }} cluster or set up billing information. + This will be available only if you haven't already created a free CockroachDB {{ site.data.products.basic }} cluster or set up billing information. {{site.data.alerts.end}} 1. Click **Next: Finalize**. @@ -79,11 +77,11 @@ All CockroachDB {{ site.data.products.cloud }} organizations get 50M RUs and 10
-1. On the **Capacity** page, if the option to **Start for free** is still available to you, select **Upgrade your capacity** instead. +1. On the **Capacity** page, select **Upgrade your capacity**, even if the option to **Start for free** is also available. 1. Configure **On-Demand capacity**. - - If you select **Unlimited**, your cluster will scale to meet your application's needs. You will only be charged for the resources you use. - - If you select **Set a monthly limit**, you can set storage and RU limits individually, or enter a dollar amount that will be split automatically between both resources. You will only be charged for the resources you use. + - **Unlimited**: your cluster will scale to meet your application's needs. You will only be charged for the resources you use. + - **Set a monthly limit**: you can set storage and RU limits individually, or enter a dollar amount that will be split automatically between both resources. You will only be charged for the resources you use. 1. Click **Next: Finalize**. @@ -108,7 +106,7 @@ Click **Create cluster**. Your cluster will be created in a few seconds. ## What's next - [Connect to your CockroachDB {{ site.data.products.basic }} cluster]({% link cockroachcloud/connect-to-your-cluster.md %}) -- [Authorize users]({% link cockroachcloud/managing-access.md %}) +- [Manage access]({% link cockroachcloud/managing-access.md %}) - [Learn CockroachDB SQL]({% link cockroachcloud/learn-cockroachdb-sql.md %}). - Explore our [example apps]({% link {{site.current_cloud_version}}/example-apps.md %}) for examples on how to build applications using your preferred driver or ORM and run it on CockroachDB. - [Migrate your existing data]({% link {{site.current_cloud_version}}/migration-overview.md %}). 
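Reviewer note: the free-resource allowances in the Basic capacity section (50M RUs and 10 GiB of storage per organization per month) can be sanity-checked with a quick sketch. The helper below is illustrative only; the function name and constants are my own, not part of any CockroachDB tooling:

```python
# Illustrative only: encodes the documented monthly free allowances for a
# CockroachDB Cloud organization (50M Request Units, 10 GiB of storage).
FREE_RUS_PER_MONTH = 50_000_000
FREE_STORAGE_GIB = 10

def fits_free_allowance(monthly_rus: int, storage_gib: float) -> bool:
    """True if an estimated workload stays within the monthly free resources,
    which are shared across all Basic clusters in the organization."""
    return monthly_rus <= FREE_RUS_PER_MONTH and storage_gib <= FREE_STORAGE_GIB

print(fits_free_allowance(20_000_000, 4))   # well inside the allowance: True
print(fits_free_allowance(80_000_000, 4))   # exceeds the RU allowance: False
```

Beyond these limits you are billed only for what you use, so a check like this is a budgeting aid, not a hard constraint.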
diff --git a/src/current/cockroachcloud/create-an-advanced-cluster.md b/src/current/cockroachcloud/create-an-advanced-cluster.md index 563a4479898..868179a3036 100644 --- a/src/current/cockroachcloud/create-an-advanced-cluster.md +++ b/src/current/cockroachcloud/create-an-advanced-cluster.md @@ -9,22 +9,25 @@ docs_area: deploy -This page guides you through the process of creating a CockroachDB {{ site.data.products.advanced }} cluster using the [Cloud Console](httrps://cockroachlabs.cloud). To use the Cloud API instead, refer to [Create a New Cluster]({% link cockroachcloud/cloud-api.md %}#create-a-new-cluster). +This page guides you through the process of creating a CockroachDB {{ site.data.products.advanced }} cluster using the [Cloud Console](https://cockroachlabs.cloud). To use the Cloud API instead, refer to [Create a New Cluster]({% link cockroachcloud/cloud-api.md %}#create-a-new-cluster). -Only [CockroachDB {{ site.data.products.cloud }} Org Administrators]({% link cockroachcloud/authorization.md %}#org-administrator) or users with Cluster Creator / Cluster Admin roles assigned at organization scope can create clusters. If you are a Developer and need to create a cluster, contact your CockroachDB {{ site.data.products.cloud }} Administrator. +Only [CockroachDB {{ site.data.products.cloud }} Org Administrators]({% link cockroachcloud/authorization.md %}#org-administrator) or users with Cluster Creator / Cluster Admin roles assigned at organization scope can create clusters. If you need permission to create a cluster, contact a CockroachDB {{ site.data.products.cloud }} Org Administrator. ## Step 1. Start the cluster creation process -1. If you haven't already, sign up for a CockroachDB {{ site.data.products.cloud }} account. +1. If you haven't already, sign up for a CockroachDB {{ site.data.products.cloud }} account, then [log in](https://cockroachlabs.cloud/). {% include cockroachcloud/prefer-sso.md %} -1. [Log in](https://cockroachlabs.cloud/) to your CockroachDB {{ site.data.products.cloud }} account. -1.
If there are multiple [organizations](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/architecture/glossary#organization) in your account, select the organization where the cluster will be created from the selector in the top right corner. -1. On the **Overview** page, click **Create Cluster**. +1. If there are multiple [organizations](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/architecture/glossary#organization) in your account, verify the one that is selected in the top right corner. +1. On the **Clusters** page, click **Create Cluster** or, if you also have permission to create folders, then click **Create > Create Cluster**. 1. On the **Select a plan** page, select the **Advanced** plan. ## Step 2. Select the cloud provider On the **Cloud & Regions page**, go to the **Cloud provider** section and select your deployment environment: **Google Cloud**, **AWS**, or **Microsoft Azure**. -You do not need an account in the deployment environment you choose. The cluster is created on infrastructure managed by Cockroach Labs. If you intend to use your CockroachDB {{ site.data.products.advanced }} cluster with data or services in a cloud tenant that you manage, select that cloud provider and the region closest to your existing cloud services to maximize performance. +{{site.data.alerts.callout_info}} +For more details about CockroachDB {{ site.data.products.advanced }} on Azure, refer to [CockroachDB Advanced on Azure]({% link cockroachcloud/cockroachdb-advanced-on-azure.md %}). +{{site.data.alerts.end}} + +You do not need an account in the deployment environment you choose. The cluster is created on infrastructure managed by Cockroach Labs. For optimal performance, create your cluster on the cloud provider and in the regions that best align with your existing cloud services. {% include cockroachcloud/cockroachcloud-pricing.md %} @@ -32,23 +35,23 @@ You do not need an account in the deployment environment you choose. 
The cluster Select the region(s) and number of nodes for your cluster: -1. In the **Regions** section, select at minimum one region. Refer to [CockroachDB {{ site.data.products.cloud }} Regions]({% link cockroachcloud/regions.md %}) for the regions where CockroachDB {{ site.data.products.advanced }} clusters can be deployed. For optimal performance, select the cloud provider region in which you are running your application. For example, if your application is deployed in GCP's `us-east1` region, select `us-east1` for your CockroachDB {{ site.data.products.advanced }} cluster. +1. In the **Regions** section, select at minimum one region. Refer to [CockroachDB {{ site.data.products.cloud }} Regions]({% link cockroachcloud/regions.md %}) for the regions where CockroachDB {{ site.data.products.advanced }} clusters can be deployed. For optimal performance, create your cluster on the cloud provider and in the regions that best align with your existing cloud services. For example, if your application is deployed in GCP's `us-east1` region, select `us-east1` for your CockroachDB {{ site.data.products.advanced }} cluster. A multi-region cluster requires at minimum three regions and can survive the loss of a single region. Refer to [Planning your cluster](plan-your-cluster-advanced.html?filters=advanced) for the configuration requirements and recommendations for CockroachDB {{ site.data.products.advanced }} clusters. 1. Select the number of nodes: - - For single-region production deployments, we recommend a minimum of 3 nodes. The number of nodes also depends on your storage capacity and performance requirements. See [Example]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) for further guidance. - - For multi-region deployments, we require a minimum of 3 nodes per region. For best performance and stability, you should use the same number of nodes in each region. - - For single-region application development and testing, you may create a single-node cluster. 
+ - For single-region production deployments, we recommend a minimum of 3 nodes. Your cluster's storage and compute capacity scale with the number of nodes. Refer to [Plan your cluster]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example). + - A multi-region deployment requires a minimum of 3 nodes per region. For best performance and stability, we recommend configuring the same number of nodes in each region. + - Single-node clusters are supported only for application development and testing, and are not available on Azure. -Refer to [Plan a CockroachDB Cloud cluster](plan-your-cluster-advanced.html) for the requirements and recommendations for CockroachDB {{ site.data.products.advanced }} cluster configuration. +Refer to [Plan a CockroachDB Advanced cluster](plan-your-cluster-advanced.html) for details. {% include cockroachcloud/nodes-limitation.md %} -Currently, you can add a maximum of 150 nodes to your cluster. For larger configurations, [contact your Cockroach Labs account team](https://support.cockroachlabs.com/hc/requests/new). +You can add a maximum of 150 nodes to your cluster. To express interest in larger configurations, [contact your Cockroach Labs account team](https://support.cockroachlabs.com/hc/requests/new). Click **Next: Capacity**. - +{% comment %}VPC peering status pending ## Step 4. Enable VPC Peering (optional) You can use [VPC peering]({% link cockroachcloud/network-authorization.md %}#vpc-peering) to connect a GCP application to a CockroachDB {{ site.data.products.cloud }} cluster deployed on GCP. A separate VPC Peering connection is required for each cluster. @@ -72,12 +75,13 @@ You can use CockroachDB {{ site.data.products.cloud }}'s default IP range and si After your cluster is created, you can [establish VPC Peering or AWS PrivateLink]({% link cockroachcloud/connect-to-an-advanced-cluster.md %}#establish-private-connectivity).
If you don't want to enable VPC Peering, leave the default selection of **Use the default IP range** as is and click **Next: Capacity**. +{% endcomment %} -## Step 5. Configure cluster capacity +## Step 4. Configure cluster capacity -{% capture cap_per_vcpu %}{% include_cached v23.1/prod-deployment/provision-storage.md %}{% endcapture %} +{% capture cap_per_vcpu %}{% include_cached {{ site.current_cloud_version }}/prod-deployment/provision-storage.md %}{% endcapture %} -The choice of hardware per node determines the [cost](#step-2-select-the-cloud-provider), throughput, and performance characteristics of your cluster. +The choice of hardware per node determines the [cost](#step-2-select-the-cloud-provider), throughput, and performance characteristics of your cluster. Refer to [Plan your {{ site.data.products.advanced }} cluster]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example). 1. On the **Capacity** page, select the **Compute per node**. @@ -106,33 +110,27 @@ The choice of hardware per node determines the [cost](#step-2-select-the-cloud-p Buffer | Additional buffer (overhead data, accounting for data growth, etc.). If you are importing an existing dataset, we recommend you provision at least 50% additional storage to account for the import functionality. Compression | The percentage of savings you can expect to achieve with compression. With CockroachDB's default compression algorithm, we typically see about a 40% savings on raw data size. - For more details about disk performance on a given cloud provider, refer to: -To change the hardware configuration after the cluster is created, see [Manage a CockroachDB {{ site.data.products.advanced }} Cluster]({% link cockroachcloud/cluster-management.md %}). -Refer to [Plan your cluster]({% link cockroachcloud/plan-your-cluster-advanced.md %}#example) for examples and further guidance.
+ For more details about disk performance, refer to: +After your cluster is created, refer to: +- [Manage a CockroachDB {{ site.data.products.advanced }} Cluster]({% link cockroachcloud/cluster-management.md %}) +- [Establish private connectivity]({% link cockroachcloud/connect-to-an-advanced-cluster.md %}#establish-private-connectivity) Click **Next: Security**. -## Step 6. Configure advanced security features +## Step 5. Configure advanced security features -You can enable advanced security features for PCI DSS and HIPAA [compliance]({% link cockroachcloud/compliance.md %}) at an additional cost. +You can enable advanced security features for PCI DSS and HIPAA [compliance]({% link cockroachcloud/compliance.md %}) at an additional cost. These features are not yet available for CockroachDB {{ site.data.products.advanced }} on Azure. Refer to [CockroachDB {{ site.data.products.advanced }} on Azure]({% link cockroachcloud/cockroachdb-advanced-on-azure.md %}). {{site.data.alerts.callout_danger}} - This configuration cannot be changed after cluster creation. + Advanced security features cannot be enabled or disabled after cluster creation. {{site.data.alerts.end}} -## Step 7. Enter billing details +## Step 6. Enter billing details -1. On the **Finalize** page, verify your selections for the cloud provider, region(s), number of nodes, and the capacity. - - Once your cluster is created, you can [establish VPC Peering or AWS PrivateLink]({% link cockroachcloud/connect-to-an-advanced-cluster.md %}#establish-private-connectivity). -1. Verify the hourly estimated cost for the cluster. The cost displayed does not include taxes. - - You will be billed monthly. - +1. On the **Finalize** page, verify: + - Your cluster's cloud provider, regions, and configuration. + - The hourly estimated cost for the cluster. The cost displayed does not include taxes. You will be billed monthly. 1. Add your preferred [payment method]({% link cockroachcloud/billing-management.md %}). 1. If applicable, the 30-day trial code is pre-applied to your cluster.
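Reviewer note: the storage rules of thumb in the capacity table above (provision roughly 50% extra as a buffer for imports and growth; expect about a 40% savings from the default compression) combine into simple arithmetic. The helper below is a hypothetical sketch under those stated assumptions, not official sizing guidance:

```python
def estimated_disk_gib(raw_data_gib: float,
                       buffer_ratio: float = 0.5,
                       compression_savings: float = 0.4) -> float:
    """Rough per-cluster disk estimate: add ~50% buffer for imports,
    overhead, and growth, then apply ~40% compression savings."""
    return raw_data_gib * (1 + buffer_ratio) * (1 - compression_savings)

# 100 GiB of raw data -> 150 GiB with buffer -> ~90 GiB after compression.
print(round(estimated_disk_gib(100), 1))  # 90.0
```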
@@ -171,29 +169,9 @@ Click **Create cluster**. Your cluster will be created in approximately 20-30 mi ## What's next -To start using your CockroachDB {{ site.data.products.cloud }} cluster, see the following pages: +To start using your CockroachDB {{ site.data.products.advanced }} cluster, refer to: - [Connect to your cluster]({% link cockroachcloud/connect-to-your-cluster.md %}) -- [Authorize users]({% link cockroachcloud/managing-access.md %}) +- [Manage access]({% link cockroachcloud/managing-access.md %}) - [Deploy a Python To-Do App with Flask, Kubernetes, and CockroachDB {{ site.data.products.cloud }}]({% link cockroachcloud/deploy-a-python-to-do-app-with-flask-kubernetes-and-cockroachcloud.md %}) - -If you created a multi-region cluster, it is important to carefully choose: - -- The most appropriate [survival goal](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/multiregion-survival-goals) for each database. -- The most appropriate [table locality](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/table-localities) for each of your tables. - -Otherwise, your cluster may experience unexpected latency and reduced resiliency. For more information, refer to [Multi-Region Capabilities Overview]({% link {{ site.current_cloud_version}}/multiregion-overview.md %}). - -{% comment %} -### [WIP] Select hardware configuration based on performance requirements - -Let's say we want to run a TPC-C workload with 500 warehouses on a CockroachDB {{ site.data.products.cloud }} cluster. - -One TPC-C `warehouse` is about 200MB of data. CockroachDB can handle approximately 45 warehouses per vCPU. So a 4 vCPU node can handle 180 warehouses which is 36GB of unreplicated raw data. - -With a default replication factor of 3, the total amount of data we need to store is (3 * 36GB) = 108GB of data. - -So for a workload resembling TPC-C, we want to build out your cluster with `Option 2` nodes, and you'll only use 1/3 of the storage. 
- - -{% endcomment %} +- For a multi-region cluster, it is important to choose the most appropriate [survival goal]({% link {{site.current_cloud_version}}/multiregion-survival-goals.md %}) for each database and the most appropriate [table locality]({% link {{site.current_cloud_version}}/table-localities.md %}) for each table. Otherwise, your cluster may experience unexpected latency and reduced resiliency. For more information, refer to [Multi-Region Capabilities Overview]({% link {{ site.current_cloud_version}}/multiregion-overview.md %}). diff --git a/src/current/cockroachcloud/create-your-cluster.md b/src/current/cockroachcloud/create-your-cluster.md index a1b52abcf1c..336cee09b7a 100644 --- a/src/current/cockroachcloud/create-your-cluster.md +++ b/src/current/cockroachcloud/create-your-cluster.md @@ -9,7 +9,7 @@ docs_area: deploy -This page guides you through the process of creating a CockroachDB {{ site.data.products.standard }} cluster using the [Cloud Console](httrps://cockroachlabs.cloud). To use the Cloud API instead, refer to [Create a New Cluster]({% link cockroachcloud/cloud-api.md %}#create-a-new-cluster). +This page guides you through the process of creating a CockroachDB {{ site.data.products.standard }} cluster using the [Cloud Console](https://cockroachlabs.cloud). To use the Cloud API instead, refer to [Create a New Cluster]({% link cockroachcloud/cloud-api.md %}#create-a-new-cluster). -Only [CockroachDB {{ site.data.products.cloud }} Org Administrators]({% link cockroachcloud/authorization.md %}#org-administrator) or users with Cluster Creator / Cluster Admin roles assigned at organization scope can create clusters. If you need to create a cluster and do not have one of the required roles, contact your CockroachDB {{ site.data.products.cloud }} Administrator. +Only [CockroachDB {{ site.data.products.cloud }} Org Administrators]({% link cockroachcloud/authorization.md %}#org-administrator) or users with Cluster Creator / Cluster Admin roles assigned at organization scope can create clusters. If you need permission to create a cluster, contact a CockroachDB {{ site.data.products.cloud }} Org Administrator. {{site.data.alerts.callout_success}} To create and connect to a 30-day free CockroachDB {{ site.data.products.standard }} cluster and run your first query, refer to the [Quickstart]({% link cockroachcloud/quickstart-trial-cluster.md %}). {{site.data.alerts.end}} ## Step 1.
Start the cluster creation process -1. If you haven't already, sign up for a CockroachDB {{ site.data.products.cloud }} account. +1. If you haven't already, sign up for a CockroachDB {{ site.data.products.cloud }} account, then [log in](https://cockroachlabs.cloud/). {% include cockroachcloud/prefer-sso.md %} -1. [Log in](https://cockroachlabs.cloud/) to your CockroachDB {{ site.data.products.cloud }} account. 1. If there are multiple organizations in your account, select the correct organization in the top right corner. @@ -28,7 +28,7 @@ To create and connect to a 30-day free CockroachDB {{ site.data.products.standar On the **Cloud & Regions** page, in the **Cloud provider** section, select your deployment environment: **Google Cloud** or **AWS**. Creating a CockroachDB {{ site.data.products.standard }} cluster on Azure is not yet supported. -You do not need an account in the deployment environment you choose. The cluster is created on infrastructure managed by Cockroach Labs. If you intend to use your CockroachDB {{ site.data.products.standard }} cluster with data or services in a cloud tenant that you manage, you should select that cloud provider and the region closest to your existing cloud services to maximize performance. +You do not need an account in the deployment environment you choose. The cluster is created on infrastructure managed by Cockroach Labs. For optimal performance, create your cluster on the cloud provider and in the regions that best align with your existing cloud services. Pricing depends on your cloud provider and region selections. @@ -38,17 +38,17 @@ Pricing depends on your cloud provider and region selections. In the **Regions** section, select at least one region. Refer to [CockroachDB {{ site.data.products.cloud }} Regions]({% link cockroachcloud/regions.md %}) for the regions where CockroachDB {{ site.data.products.standard }} clusters can be deployed.
-For optimal performance, select the cloud provider region nearest to the region where you are running your application. For example, if your application is deployed in GCP's `us-east1` region, create your cluster on GCP and select `us-east1` for your CockroachDB {{ site.data.products.standard }} cluster. +For optimal performance, create your cluster on the cloud provider and in the regions that best align with your existing cloud services. For example, if your application is deployed in GCP's `us-east1` region, create your cluster on GCP and select `us-east1` for your CockroachDB {{ site.data.products.standard }} cluster. A multi-region cluster can survive the loss of a single region. For multi-region clusters, CockroachDB will optimize access to data from the primary region. Refer to [Planning your cluster](plan-your-cluster.html) for the configuration requirements and recommendations for CockroachDB {{ site.data.products.standard }} clusters. {{site.data.alerts.callout_info}} -You cannot remove regions once they have been added. +You cannot remove a region. {{site.data.alerts.end}} After creating a multi-region cluster deployed on AWS, you can optionally [set up AWS PrivateLink (Limited Access)]({% link cockroachcloud/network-authorization.md %}#aws-privatelink) so that incoming connections to your cluster from applications or services running in your AWS account flow over private AWS network infrastructure rather than the public internet. -Private connectivity is not available for {{ site.data.products.serverless }} clusters on GCP. +Private connectivity is not available for {{ site.data.products.standard }} clusters on GCP. Click **Next: Capacity**. @@ -56,8 +56,9 @@ Click **Next: Capacity**. Provisioned capacity refers to the processing resources (Request Units per sec) reserved for your workload. Each 500 RUs/sec equals approximately 1 vCPU. We recommend setting capacity at least 40% above expected peak workload to avoid performance issues. 
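Reviewer note: the provisioned-capacity guidance above (each 500 RUs/sec is approximately 1 vCPU; set capacity at least 40% above expected peak) is easy to express as a worked example. This is a hypothetical sketch with names of my own choosing, not a CockroachDB API:

```python
def provisioned_capacity(peak_rus_per_sec: float, headroom: float = 0.4):
    """Return (RUs/sec to provision, approximate vCPU equivalent) using the
    rules of thumb above: peak plus 40% headroom, ~500 RUs/sec per vCPU."""
    capacity = peak_rus_per_sec * (1 + headroom)
    return capacity, capacity / 500

capacity, vcpus = provisioned_capacity(2_000)
print(capacity, vcpus)  # 2800.0 5.6
```

For a workload peaking at 2,000 RUs/sec, this suggests provisioning 2,800 RUs/sec, roughly the processing power of 5.6 vCPUs.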
Refer to [Planning your cluster](plan-your-cluster.html) for the configuration requirements and recommendations for CockroachDB {{ site.data.products.standard }} clusters. +{% comment %}Verify VPC Peering status {{site.data.alerts.callout_success}} -You can [set up private connectivity]({% link cockroachcloud/connect-to-your-cluster.md %}#gcp-private-service-connect) after creating your cluster. +You can [set up private connectivity]({% link cockroachcloud/connect-to-your-cluster.md %}#establish-private-connectivity) after creating your cluster. {{site.data.alerts.end}} You can use CockroachDB {{ site.data.products.cloud }}'s default IP range and size (`172.28.0.0/14`) as long as it doesn't overlap with the IP ranges in your network. Alternatively, you can configure the IP range: @@ -72,18 +73,17 @@ You can use CockroachDB {{ site.data.products.cloud }}'s default IP range and si Custom IP ranges are temporarily unavailable for multi-region clusters. {{site.data.alerts.end}} - After your cluster is created, refer to [Establish private connectivity]({% link cockroachcloud/connect-to-your-cluster.md %}#gcp-vpc-peering) to finish setting up VPC Peering for your cluster. + After your cluster is created, refer to [Establish private connectivity]({% link cockroachcloud/connect-to-your-cluster.md %}#gcp-vpc-peering) to finish setting up VPC Peering for your cluster.{% endcomment %} Click **Next: Finalize**. ## Step 5. Enter billing details -1. On the **Finalize** page, verify your selections for the cloud provider, region(s), and the capacity. -1. Verify the hourly estimated cost for the cluster. The cost displayed does not include taxes. +1. On the **Finalize** page, verify: + - Your cluster's cloud provider, regions, and configuration. + - The hourly estimated cost for the cluster. The cost displayed does not include taxes. You will be billed monthly. - You will be billed monthly. - -1. 
If you have not yet configured billing for your CockroachDB {{ site.data.products.cloud }} organization, add your preferred [payment method]({% link cockroachcloud/billing-management.md %}). +1. Add your preferred [payment method]({% link cockroachcloud/billing-management.md %}). 1. If applicable, the 30-day trial code is pre-applied to your cluster. {{site.data.alerts.callout_info}} Make sure that you [delete your trial cluster]({% link cockroachcloud/cluster-management.md %}#delete-cluster) before the trial expires. Your credit card will be charged after the trial ends. You can check the validity of the code on the [Billing]({% link cockroachcloud/billing-management.md %}) page. @@ -97,16 +97,14 @@ The cluster is automatically given a randomly-generated name. If desired, change Click **Create cluster**. Your cluster will be created in a few seconds. -{% comment %}Commented out until this is in the Cloud 2.0 UI - -## Step 8. Select the CockroachDB version +{% comment %}## Step 8. Select the CockroachDB version -When you create a new CockroachDB {{ site.data.products.dedicated }} cluster, it defaults to using the [latest CockroachDB {{ site.data.products.cloud }} production release]({% link releases/cloud.md %}) unless you select a release explicitly. Releases are rolled out gradually to CockroachDB {{ site.data.products.cloud }}. At any given time, you may be able to choose among multiple releases. In the list: +When you create a new CockroachDB {{ site.data.products.standard }} cluster, it defaults to using the [latest CockroachDB {{ site.data.products.cloud }} production release]({% link releases/cloud.md %}) unless you select a release explicitly. Releases are rolled out gradually to CockroachDB {{ site.data.products.cloud }}. At any given time, you may be able to choose among multiple releases. In the list: - **No label**: The latest patch of a Regular [Production release]({% link cockroachcloud/upgrade-policy.md %}) that is not the latest. 
A Regular release has full support for one year from the release date, at which a cluster must be [upgraded]({% link cockroachcloud/upgrade-policy.md %}) to maintain support. - **Latest**: The latest patch of the latest regular [Production release]({% link cockroachcloud/upgrade-policy.md %}). This is the default version for new clusters. - **Innovation Release**: The latest patch of an [Innovation release]({% link cockroachcloud/upgrade-policy.md %}). Innovation releases are optional releases that provide earlier access to new features, and are released between regular releases. An Innovation release has full support for six months from the release date, at which time a cluster must be [upgraded]({% link cockroachcloud/upgrade-policy.md %}) to the next Regular release to maintain support. -- **Pre-Production Preview**: A [Pre-Production Preview]({% link cockroachcloud/upgrade-policy.md %}#pre-production-preview-upgrades). Leading up to a new CockroachDB Regular [Production release]({% link cockroachcloud/upgrade-policy.md %}), a series of Beta and Release Candidate (RC) patches may be made available for CockroachDB {{ site.data.products.dedicated }} as Pre-Production Preview releases. Pre-Production Preview releases are not suitable for production environments. They are no longer available in CockroachDB {{ site.data.products.cloud }} for new clusters or upgrades after the new version is GA. When the GA release is available, a cluster running a Pre-Production Preview is automatically upgraded to the GA release and subsequent patches and is eligible for support. +- **Pre-Production Preview**: A [Pre-Production Preview]({% link cockroachcloud/upgrade-policy.md %}#pre-production-preview-upgrades). Leading up to a new CockroachDB Regular [Production release]({% link cockroachcloud/upgrade-policy.md %}), a series of Beta and Release Candidate (RC) patches may be made available for CockroachDB {{ site.data.products.standard }} as Pre-Production Preview releases. 
Pre-Production Preview releases are not suitable for production environments. They are no longer available in CockroachDB {{ site.data.products.cloud }} for new clusters or upgrades after the new version is GA. When the GA release is available, a cluster running a Pre-Production Preview is automatically upgraded to the GA release and subsequent patches and is eligible for support. 1. To choose a version for your cluster, select the cluster version from the **Cluster version** list. @@ -121,23 +119,9 @@ Click **Create cluster**. Your cluster will be created in approximately 20-30 mi ## What's next -To start using your CockroachDB {{ site.data.products.dedicated }} cluster, refer to: +To start using your CockroachDB {{ site.data.products.standard }} cluster, refer to: - [Connect to your cluster]({% link cockroachcloud/connect-to-your-cluster.md %}) - [Authorize users]({% link cockroachcloud/managing-access.md %}) - [Deploy a Python To-Do App with Flask, Kubernetes, and CockroachDB {{ site.data.products.cloud }}]({% link cockroachcloud/deploy-a-python-to-do-app-with-flask-kubernetes-and-cockroachcloud.md %}) -- For [multi-region clusters]({% link {{ site.current_cloud_version}}/multiregion-overview.md %}), learn how to reduce latency and increase resiliency by choosing the best [survival goal]({% link {{site.current_cloud_version}}/multiregion-survival-goals.md %}) for each database and the best [table locality]({% link {{site.current_cloud_version}}/table-localities.md %}) for each table. - -{% comment %} -### [WIP] Select hardware configuration based on performance requirements - -Let's say we want to run a TPC-C workload with 500 warehouses on a CockroachDB {{ site.data.products.cloud }} cluster. - -One TPC-C `warehouse` is about 200MB of data. CockroachDB can handle approximately 45 warehouses per vCPU. So a 4 vCPU node can handle 180 warehouses which is 36GB of unreplicated raw data. 
- -With a default replication factor of 3, the total amount of data we need to store is (3 * 36GB) = 108GB of data. - -So for a workload resembling TPC-C, we want to build out your cluster with `Option 2` nodes, and you'll only use 1/3 of the storage. - - -{% endcomment %} +- For a multi-region cluster, it is important to choose the most appropriate [survival goal]({% link {{site.current_cloud_version}}/multiregion-survival-goals.md %}) for each database and the most appropriate [table locality]({% link {{site.current_cloud_version}}/table-localities.md %}) for each table. Otherwise, your cluster may experience unexpected latency and reduced resiliency. For more information, refer to [Multi-Region Capabilities Overview]({% link {{ site.current_cloud_version}}/multiregion-overview.md %}). diff --git a/src/current/v21.2/security-reference/security-overview.md b/src/current/v21.2/security-reference/security-overview.md index c962255341b..4bf2eed5315 100644 --- a/src/current/v21.2/security-reference/security-overview.md +++ b/src/current/v21.2/security-reference/security-overview.md @@ -158,7 +158,7 @@ CockroachDB {{ site.data.products.core }} here refers to the situation of a user ✓ ✓ ✓ - VPC Peering for GCP clusters and AWS PrivateLink for AWS clusters + VPC Peering for GCP clusters and AWS PrivateLink for AWS clusters Non-Repudiation diff --git a/src/current/v22.1/security-reference/security-overview.md b/src/current/v22.1/security-reference/security-overview.md index 92999405a9d..94936277ada 100644 --- a/src/current/v22.1/security-reference/security-overview.md +++ b/src/current/v22.1/security-reference/security-overview.md @@ -169,7 +169,7 @@ CockroachDB {{ site.data.products.core }} here refers to the situation of a user ✓ ✓ ✓ - VPC Peering for GCP clusters and AWS PrivateLink for AWS clusters + VPC Peering for GCP clusters and AWS PrivateLink for AWS clusters Non-Repudiation diff --git a/src/current/v22.2/security-reference/security-overview.md 
b/src/current/v22.2/security-reference/security-overview.md index febb6e70b3f..bcac4d1fcf0 100644 --- a/src/current/v22.2/security-reference/security-overview.md +++ b/src/current/v22.2/security-reference/security-overview.md @@ -169,7 +169,7 @@ CockroachDB {{ site.data.products.core }} here refers to the situation of a user ✓ ✓ ✓ - VPC Peering for GCP clusters and AWS PrivateLink for AWS clusters + VPC Peering for GCP clusters and AWS PrivateLink for AWS clusters Non-Repudiation diff --git a/src/current/v23.1/deploy-cockroachdb-with-kubernetes.md b/src/current/v23.1/deploy-cockroachdb-with-kubernetes.md index 89b0f7f11d2..71dc9df4ce2 100644 --- a/src/current/v23.1/deploy-cockroachdb-with-kubernetes.md +++ b/src/current/v23.1/deploy-cockroachdb-with-kubernetes.md @@ -23,6 +23,7 @@ This page shows you how to start and stop a secure 3-node CockroachDB cluster in {% include cockroachcloud/use-cockroachcloud-instead.md %} + ## Limitations {% include {{ page.version.version }}/orchestration/kubernetes-limitations.md %} @@ -38,7 +39,7 @@ Choose how you want to deploy and maintain the CockroachDB cluster. {{site.data.alerts.callout_info}} The [CockroachDB Kubernetes Operator](https://github.com/cockroachdb/cockroach-operator) eases CockroachDB cluster creation and management on a single Kubernetes cluster. -Note that the Operator does not provision or apply an Enterprise license key. To use [Enterprise features]({% link {{ page.version.version }}/enterprise-licensing.md %}) with the Operator, [set a license]({% link {{ page.version.version }}/licensing-faqs.md %}#set-a-license) in the SQL shell. +The Operator does not provision or apply an Enterprise license key. To use [Enterprise features]({% link {{ page.version.version }}/enterprise-licensing.md %}) with the Operator, [set a license]({% link {{ page.version.version }}/licensing-faqs.md %}#set-a-license) in the SQL shell. {{site.data.alerts.end}}
@@ -70,7 +71,7 @@ Note that the Operator does not provision or apply an Enterprise license key. To ## Step 5. Stop the cluster {{site.data.alerts.callout_info}} -If you want to continue using this cluster, see the documentation on [configuring]({% link {{ page.version.version }}/configure-cockroachdb-kubernetes.md %}), [scaling]({% link {{ page.version.version }}/scale-cockroachdb-kubernetes.md %}), [monitoring]({% link {{ page.version.version }}/monitor-cockroachdb-kubernetes.md %}), and [upgrading]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) the cluster. +If you want to continue using this cluster, refer to the documentation on [configuring]({% link {{ page.version.version }}/configure-cockroachdb-kubernetes.md %}), [scaling]({% link {{ page.version.version }}/scale-cockroachdb-kubernetes.md %}), [monitoring]({% link {{ page.version.version }}/monitor-cockroachdb-kubernetes.md %}), and [upgrading]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) the cluster. {{site.data.alerts.end}} {% include {{ page.version.version }}/orchestration/kubernetes-stop-cluster.md %} diff --git a/src/current/v23.1/node-shutdown.md b/src/current/v23.1/node-shutdown.md index 3b63002abc9..4c549df1edc 100644 --- a/src/current/v23.1/node-shutdown.md +++ b/src/current/v23.1/node-shutdown.md @@ -9,24 +9,24 @@ A node **shutdown** terminates the `cockroach` process on the node. There are two ways to handle node shutdown: -- **Drain a node** to temporarily stop it when you plan to restart it later, such as during cluster maintenance. When you drain a node: +- **Drain a node** to temporarily stop it when you plan to restart it later, such as during cluster maintenance. When you drain a node: - Clients are disconnected, and subsequent connection requests are sent to other nodes. - - The node's [data store]({% link {{ page.version.version }}/cockroach-start.md %}#store) is preserved and will be reused as long as the node restarts in a short time.
Otherwise, the node's data is moved to other nodes. + - The node's data store is preserved and will be reused as long as the node restarts in a short time. Otherwise, the node's data is moved to other nodes. - After the node is drained, you can terminate the `cockroach` process, perform maintenance, then restart it. CockroachDB automatically drains a node when [upgrading its cluster version]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). Draining a node is lightweight because it generates little node-to-node traffic across the cluster. + After the node is drained, you can manually terminate the `cockroach` process to perform maintenance, then restart the process for the node to rejoin the cluster. {% include_cached new-in.html version="v24.2" %}The `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) automatically terminates the `cockroach` process after draining completes. A node is also automatically drained when [upgrading its major version]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). Draining a node is lightweight because it generates little node-to-node traffic across the cluster. - **Decommission a node** to permanently remove it from the cluster, such as when scaling down the cluster or to replace the node due to hardware failure. During decommission: - The node is drained automatically if you have not manually drained it. - - The node's data is moved off the node to other nodes. This [replica rebalancing]({% link {{ page.version.version }}/architecture/replication-layer.md %}#membership-changes-rebalance-repair) generates a large amount of node-to-node network traffic, so decommissioning a node is considered a heavyweight operation. + - The node's data is moved off the node to other nodes. 
This [replica rebalancing]({% link {{ page.version.version }}/architecture/glossary.md %}#replica) generates a large amount of node-to-node network traffic, so decommissioning a node is considered a heavyweight operation. This page describes: - The details of the [node shutdown sequence](#node-shutdown-sequence) from the point of view of the `cockroach` process on a CockroachDB node. - How to [prepare for graceful shutdown](#prepare-for-graceful-shutdown) on CockroachDB {{ site.data.products.core }} clusters by coordinating load balancer, client application server, process manager, and cluster settings. - How to [perform node shutdown](#perform-node-shutdown) on CockroachDB {{ site.data.products.core }} deployments by manually draining or decommissioning a node. -- How to handle node shutdown when CockroachDB is deployed using [Kubernetes](#decommissioning-and-draining-on-kubernetes) or in a [CockroachDB {{ site.data.products.dedicated }} cluster](#decommissioning-and-draining-on-cockroachdb-dedicated). +- How to handle node shutdown when CockroachDB is deployed using [Kubernetes](#decommissioning-and-draining-on-kubernetes) or in a [CockroachDB {{ site.data.products.advanced }} cluster](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.callout_success}} -This guidance applies to primarily to manual deployments. For more details about graceful termination when CockroachDB is deployed using Kubernetes, refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes). For more details about graceful termination in a CockroachDB {{ site.data.products.dedicated }} cluster, refer to [Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }}](#decommissioning-and-draining-on-cockroachdb-dedicated). +This guidance applies primarily to manual deployments.
For more details about graceful termination when CockroachDB is deployed using Kubernetes, refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes). For more details about graceful termination in a CockroachDB {{ site.data.products.advanced }} cluster, refer to [Decommissioning and draining on CockroachDB {{ site.data.products.advanced }}](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.end}}
@@ -62,12 +62,16 @@ After this stage, the node is automatically drained. However, to avoid possible
-An operator [initiates the draining process](#drain-the-node-and-terminate-the-node-process) on the node. Draining a node disconnects clients after active queries are completed, and transfers any [range leases]{% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) to other nodes, but does not move replicas or data off of the node. When draining is complete, you can send a `SIGTERM` signal to the `cockroach` process to shut it down, perform the required maintenance, and then restart the `cockroach` process on the node. +An operator [initiates the draining process](#drain-the-node-and-terminate-the-node-process) on the node. Draining a node disconnects clients after active queries are completed, and transfers any [range leases]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) to other nodes, but does not move replicas or data off of the node. + +When draining is complete, the node must be shut down prior to any maintenance. After a 60-second wait at minimum, you can send a `SIGTERM` signal to the `cockroach` process to shut it down. {% include_cached new-in.html version="v24.2" %}The `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) automatically terminates the `cockroach` process after draining completes. + +After you perform the required maintenance, you can restart the `cockroach` process on the node for it to rejoin the cluster. {% capture drain_early_termination_warning %}Do not terminate the `cockroach` process before all of the phases of draining are complete. 
Otherwise, you may experience latency spikes until the [leases]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) that were on that node have transitioned to other nodes. It is safe to terminate the `cockroach` process only after a node has completed the drain process. This is especially important in a containerized system, to allow all TCP connections to terminate gracefully.{% endcapture %} {{site.data.alerts.callout_danger}} -{{ drain_early_termination_warning }} If necessary, adjust the [`server.shutdown.drain_wait`](#server-shutdown-drain_wait) cluster setting and the [termination grace period]({% link {{ page.version.version }}/node-shutdown.md %}?filters=decommission#termination-grace-period) and adjust your process manager or other deployment tooling to allow adequate time for the node to finish draining before it is terminated or restarted. +{{ drain_early_termination_warning }} If necessary, adjust the [`server.shutdown.initial_wait`](#server-shutdown-initial_wait) cluster setting and the [termination grace period]({% link {{ page.version.version}}/node-shutdown.md %}?filters=decommission#termination-grace-period), and adjust your process manager or other deployment tooling to allow adequate time for the node to finish draining before it is terminated or restarted. {{site.data.alerts.end}}
@@ -78,22 +82,22 @@ After all replicas on a decommissioning node are rebalanced, the node is automat Node drain consists of the following consecutive phases: -1. **Unready phase:** The node's [`/health?ready=1` endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#health-ready-1) returns an HTTP `503 Service Unavailable` response code, which causes load balancers and connection managers to reroute traffic to other nodes. This phase completes when the [fixed duration set by `server.shutdown.drain_wait`](#server-shutdown-drain_wait) is reached. +1. **Unready phase:** The node's [`/health?ready=1` endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#health-ready-1) returns an HTTP `503 Service Unavailable` response code, which causes load balancers and connection managers to reroute traffic to other nodes. This phase completes when the [fixed duration set by `server.shutdown.initial_wait`](#server-shutdown-initial_wait) is reached. -1. **SQL wait phase:** New SQL client connections are no longer permitted, and any remaining SQL client connections are allowed to close or time out. This phase completes either when all SQL client connections are closed or the [maximum duration set by `server.shutdown.connection_wait`](#server-shutdown-connection_wait) is reached. +1. **SQL wait phase:** New SQL client connections are no longer permitted, and any remaining SQL client connections are allowed to close or time out. This phase completes either when all SQL client connections are closed or the [maximum duration set by `server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) is reached. -1. **SQL drain phase:** All active transactions and statements for which the node is a [gateway]({% link {{ page.version.version }}/architecture/life-of-a-distributed-transaction.md %}#gateway) are allowed to complete, and CockroachDB closes the SQL client connections immediately afterward. 
After this phase completes, CockroachDB closes all remaining SQL client connections to the node. This phase completes either when all transactions have been processed or the [maximum duration set by `server.shutdown.query_wait`](#server-shutdown-query_wait) is reached. +1. **SQL drain phase:** All active transactions and statements for which the node is a [gateway]({% link {{ page.version.version }}/architecture/life-of-a-distributed-transaction.md %}#gateway) are allowed to complete, and CockroachDB closes the SQL client connections immediately afterward. After this phase completes, CockroachDB closes all remaining SQL client connections to the node. This phase completes either when all transactions have been processed or the [maximum duration set by `server.shutdown.transactions.timeout`](#server-shutdown-transactions-timeout) is reached. -1. **DistSQL drain phase**: All [distributed statements]({% link {{ page.version.version }}/architecture/sql-layer.md %}#distsql) initiated on other gateway nodes are allowed to complete, and DistSQL requests from other nodes are no longer accepted. This phase completes either when all transactions have been processed or the [maximum duration set by `server.shutdown.query_wait`](#server-shutdown-query_wait) is reached. +1. **DistSQL drain phase**: All [distributed statements]({% link {{ page.version.version }}/architecture/sql-layer.md %}#distsql) initiated on other gateway nodes are allowed to complete, and DistSQL requests from other nodes are no longer accepted. This phase completes either when all transactions have been processed or the [maximum duration set by `server.shutdown.transactions.timeout`](#server-shutdown-transactions-timeout) is reached. 1. **Lease transfer phase:** The node's [`is_draining`]({% link {{ page.version.version }}/cockroach-node.md %}#node-status) field is set to `true`, which removes the node as a candidate for replica rebalancing, lease transfers, and query planning. 
Any [range leases]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) or [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) must be transferred to other nodes. This phase completes when all range leases and Raft leaderships have been transferred.
- Since all [range replicas]({% link {{ page.version.version }}/architecture/glossary.md %}#replica) were already removed from the node during the [draining](#draining) stage, this step immediately resolves. + Since all range replicas were already removed from the node during the [draining](#draining) stage, this step immediately resolves.
-When [draining manually](#drain-a-node-manually), if the above steps have not completed after [`server.shutdown.drain_wait`](#server-shutdown-drain_wait), node draining will stop and must be restarted manually to continue. For more information, see [Drain timeout](#drain-timeout). +When [draining manually](#drain-a-node-manually), if the above steps have not completed after [`server.shutdown.initial_wait`](#server-shutdown-initial_wait), node draining will stop and must be restarted manually to continue. For more information, see [Drain timeout](#drain-timeout).
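Taken together, the bounded drain phases above imply a worst-case wait before lease transfers begin. A quick sketch using the default values cited on this page (the arithmetic is illustrative, not CockroachDB code; tune the variables to your own cluster settings):

```shell
# Defaults cited on this page (seconds)
initial_wait=0           # server.shutdown.initial_wait (fixed duration)
connections_timeout=0    # server.shutdown.connections.timeout (maximum)
transactions_timeout=10  # server.shutdown.transactions.timeout (maximum)

# The SQL drain phase and the DistSQL drain phase each use
# transactions.timeout, hence it is counted twice.
worst_case=$((initial_wait + connections_timeout + 2 * transactions_timeout))
echo "bounded drain phases: up to ${worst_case}s (lease transfer phase is unbounded)"
```

With the defaults shown, the bounded phases can take up to 20 seconds; the lease transfer phase then runs until all transfers complete.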
@@ -109,14 +113,17 @@ At this point, it is safe to terminate the `cockroach` process manually or using #### Process termination
-After draining and decommissioning are complete, an operator [terminates the node process](#terminate-the-node-process). +After draining and decommissioning are complete, an operator [terminates the node process](?filters=decommission#terminate-the-node-process).
After draining is complete: - If the node was drained automatically because the `cockroach` process received a `SIGTERM` signal, the `cockroach` process is automatically terminated when draining is complete. -- If the node was drained manually because an operator issued a `cockroach node drain` command, the `cockroach` process must be terminated manually. A minimum of 60 seconds after draining is complete, send it a `SIGTERM` signal to terminate it. Refer to [Terminate the node process](#drain-the-node-and-terminate-the-node-process). +- If the node was drained manually because an operator issued a `cockroach node drain` command: + - {% include_cached new-in.html version="v24.2" %}If you pass the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags), the `cockroach` process terminates automatically after draining completes. + - If the node's major version is being updated, the `cockroach` process terminates automatically after draining completes. + - Otherwise, the `cockroach` process must be terminated manually. A minimum of 60 seconds after draining is complete, send it a `SIGTERM` signal to terminate it. Refer to [Terminate the node process](#drain-the-node-and-terminate-the-node-process).
@@ -155,43 +162,52 @@ Before you [perform node shutdown](#perform-node-shutdown), review the following Your [load balancer]({% link {{ page.version.version }}/recommended-production-settings.md %}#load-balancing) should use the [`/health?ready=1` endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#health-ready-1) to actively monitor node health and direct SQL client connections away from draining nodes. -To handle node shutdown effectively, the load balancer must be given enough time by the [`server.shutdown.drain_wait` duration](#server-shutdown-drain_wait). +To handle node shutdown effectively, the load balancer must be given enough time by the [`server.shutdown.initial_wait` duration](#server-shutdown-initial_wait). ### Cluster settings -#### `server.shutdown.drain_wait` +#### `server.shutdown.initial_wait` + -`server.shutdown.drain_wait` sets a **fixed** duration for the ["unready phase"](#draining) of node drain. Because a load balancer reroutes connections to non-draining nodes within this duration (`0s` by default), this setting should be coordinated with the load balancer settings. +Alias: `server.shutdown.drain_wait` -Increase `server.shutdown.drain_wait` so that your load balancer is able to make adjustments before this phase times out. Because the drain process waits unconditionally for the `server.shutdown.drain_wait` duration, do not set this value too high. +`server.shutdown.initial_wait` sets a **fixed** duration for the ["unready phase"](#draining) of node drain. Because a load balancer reroutes connections to non-draining nodes within this duration (`0s` by default), this setting should be coordinated with the load balancer settings. -For example, [HAProxy]({% link {{ page.version.version }}/cockroach-gen.md %}#generate-an-haproxy-config-file) uses the default settings `inter 2000 fall 3` when checking server health. 
This means that HAProxy considers a node to be down (and temporarily removes the server from the pool) after 3 unsuccessful health checks being run at intervals of 2000 milliseconds. To ensure HAProxy can run 3 consecutive checks before timeout, set `server.shutdown.drain_wait` to `8s` or greater: +Increase `server.shutdown.initial_wait` so that your load balancer is able to make adjustments before this phase times out. Because the drain process waits unconditionally for the `server.shutdown.initial_wait` duration, do not set this value too high. + +For example, [HAProxy]({% link {{ page.version.version }}/cockroach-gen.md %}#generate-an-haproxy-config-file) uses the default settings `inter 2000 fall 3` when checking server health. This means that HAProxy considers a node to be down (and temporarily removes the server from the pool) after 3 unsuccessful health checks being run at intervals of 2000 milliseconds. To ensure HAProxy can run 3 consecutive checks before timeout, set `server.shutdown.initial_wait` to `8s` or greater: {% include_cached copy-clipboard.html %} ~~~ sql -SET CLUSTER SETTING server.shutdown.drain_wait = '8s'; +SET CLUSTER SETTING server.shutdown.initial_wait = '8s'; ~~~ -#### `server.shutdown.connection_wait` +#### `server.shutdown.connections.timeout` + + +Alias: `server.shutdown.connection_wait` -`server.shutdown.connection_wait` sets the **maximum** duration for the ["connection phase"](#draining) of node drain. SQL client connections are allowed to close or time out within this duration (`0s` by default). This setting presents an option to gracefully close the connections before CockroachDB forcibly closes those that remain after the ["SQL drain phase"](#draining). +`server.shutdown.connections.timeout` sets the **maximum** duration for the ["connection phase"](#draining) of node drain. SQL client connections are allowed to close or time out within this duration (`0s` by default). 
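The HAProxy sizing above (`inter 2000 fall 3` leading to a recommendation of `8s` or greater) can be checked with a quick calculation. The one-extra-interval margin here is an assumption for headroom, not an HAProxy or CockroachDB rule:

```shell
# HAProxy defaults cited above:
inter_ms=2000   # milliseconds between health checks
fall=3          # consecutive failed checks before the node is marked down

down_ms=$((inter_ms * fall))   # time to mark the node down: 6000 ms
margin_ms=$inter_ms            # headroom of one extra check interval (an assumption)

# Round up to whole seconds
min_s=$(( (down_ms + margin_ms + 999) / 1000 ))
echo "server.shutdown.initial_wait should be at least ${min_s}s"
```

This reproduces the `8s` floor recommended above; if you use different `inter`/`fall` values, substitute them and recompute.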
This setting presents an option to gracefully close the connections before CockroachDB forcibly closes those that remain after the ["SQL drain phase"](#draining). Change this setting **only** if you cannot tolerate connection errors during node drain and cannot configure the maximum lifetime of SQL client connections, which is usually configurable via a [connection pool]({% link {{ page.version.version }}/connection-pooling.md %}#about-connection-pools). Depending on your requirements: -- Lower the maximum lifetime of a SQL client connection in the pool. This will cause more frequent reconnections. Set `server.shutdown.connection_wait` above this value. -- If you cannot tolerate more frequent reconnections, do not change the SQL client connection lifetime. Instead, use a longer `server.shutdown.connection_wait`. This will cause a longer draining process. +- Lower the maximum lifetime of a SQL client connection in the pool. This will cause more frequent reconnections. Set `server.shutdown.connections.timeout` above this value. +- If you cannot tolerate more frequent reconnections, do not change the SQL client connection lifetime. Instead, use a longer `server.shutdown.connections.timeout`. This will cause a longer draining process. -#### `server.shutdown.query_wait` +#### `server.shutdown.transactions.timeout` + -`server.shutdown.query_wait` sets the **maximum** duration for the ["SQL drain phase"](#draining) and the **maximum** duration for the ["DistSQL drain phase"](#draining) of node drain. Active local and distributed queries must complete, in turn, within this duration (`10s` by default). +Alias: `server.shutdown.query_wait` -Ensure that `server.shutdown.query_wait` is greater than: +`server.shutdown.transactions.timeout` sets the **maximum** duration for the ["SQL drain phase"](#draining) and the **maximum** duration for the ["DistSQL drain phase"](#draining) of node drain. 
Active local and distributed queries must complete, in turn, within this duration (`10s` by default). + +Ensure that `server.shutdown.transactions.timeout` is greater than: - The longest possible transaction in the workload that is expected to complete successfully. - The `sql.defaults.idle_in_transaction_session_timeout` cluster setting, which controls the duration a session is permitted to idle in a transaction before the session is terminated (`0s` by default). - The `sql.defaults.statement_timeout` cluster setting, which controls the duration a query is permitted to run before it is canceled (`0s` by default). -`server.shutdown.query_wait` defines the upper bound of the duration, meaning that node drain proceeds to the next phase as soon as the last open transaction completes. +`server.shutdown.transactions.timeout` defines the upper bound of the duration, meaning that node drain proceeds to the next phase as soon as the last open transaction completes. {{site.data.alerts.callout_success}} If there are still open transactions on the draining node when the server closes its connections, you will encounter errors. You may need to adjust your application server's [connection pool]({% link {{ page.version.version }}/connection-pooling.md %}#about-connection-pools) settings. @@ -199,12 +215,15 @@ If there are still open transactions on the draining node when the server closes {% include {{page.version.version}}/sql/sql-defaults-cluster-settings-deprecation-notice.md %} -#### `server.shutdown.lease_transfer_wait` +#### `server.shutdown.lease_transfer_iteration.timeout` + + +Alias: `server.shutdown.lease_transfer_wait` -In the ["lease transfer phase"](#draining) of node drain, the server attempts to transfer all [range leases]{% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) from the draining node. 
[`server.shutdown.lease_transfer_wait`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-server-shutdown-lease-transfer-wait) sets the maximum duration of each iteration of this attempt. Because this phase does not exit until all transfers are completed, changing this value affects only the frequency at which drain progress messages are printed. +In the ["lease transfer phase"](#draining) of node drain, the server attempts to transfer all [range leases]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) from the draining node. `server.shutdown.lease_transfer_iteration.timeout` sets the maximum duration of each iteration of this attempt (`5s` by default). Because this phase does not exit until all transfers are completed, changing this value affects only the frequency at which drain progress messages are printed.
-In most cases, the default value is suitable. Do **not** set `server.shutdown.lease_transfer_wait` to a value lower than `5s`. In this case, leases can fail to transfer and node drain will not be able to complete. +In most cases, the default value is suitable. Do **not** set `server.shutdown.lease_transfer_iteration.timeout` to a value lower than `5s`. If the timeout is too low, leases can fail to transfer and node drain will not be able to complete.
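As a sanity check on the overall drain timeout, the bounded phase durations can be summed and compared against the `--drain-wait` flag. A sketch with hypothetical setting values (substitute your own):

```shell
# Hypothetical cluster-setting values, in seconds (substitute your own)
initial_wait=8
connections_timeout=10
transactions_timeout=15
lease_transfer_iteration_timeout=5
drain_wait=60   # value passed via --drain-wait to cockroach node drain

# transactions.timeout counts twice: once for the SQL drain phase and
# once for the DistSQL drain phase.
budget=$((initial_wait + connections_timeout + 2 * transactions_timeout + lease_transfer_iteration_timeout))

if [ "$drain_wait" -gt "$budget" ]; then
  echo "OK: --drain-wait (${drain_wait}s) exceeds the phase budget (${budget}s)"
else
  echo "WARNING: increase --drain-wait above ${budget}s"
fi
```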
@@ -212,7 +231,7 @@ Since [decommissioning](#decommissioning) a node rebalances all of its range rep
{{site.data.alerts.callout_info}} -The sum of [`server.shutdown.drain_wait`](#server-shutdown-drain_wait), [`server.shutdown.connection_wait`](#server-shutdown-connection_wait), [`server.shutdown.query_wait`](#server-shutdown-query_wait) times two, and [`server.shutdown.lease_transfer_wait`](#server-shutdown-lease_transfer_wait) should not be greater than the configured [drain timeout](#drain-timeout). +The sum of [`server.shutdown.initial_wait`](#server-shutdown-initial_wait), [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout), [`server.shutdown.transactions.timeout`](#server-shutdown-transactions-timeout) times two, and [`server.shutdown.lease_transfer_iteration.timeout`](#server-shutdown-lease_transfer_iteration-timeout) should not be greater than the configured [drain timeout](#drain-timeout). {{site.data.alerts.end}} #### `kv.allocator.recovery_store_selector` @@ -254,7 +273,7 @@ A very long drain may indicate an anomaly, and you should manually inspect the s CockroachDB automatically increases the verbosity of logging when it detects a stall in the range lease transfer stage of `node drain`. Messages logged during such a stall include the time an attempt occurred, the total duration stalled waiting for the transfer attempt to complete, and the lease that is being transferred. -`--drain-wait` sets the timeout for [all draining phases](#draining) and is **not** related to the `server.shutdown.drain_wait` cluster setting, which configures the "unready phase" of draining. The value of `--drain-wait` should be greater than the sum of [`server.shutdown.drain_wait`](#server-shutdown-drain_wait), [`server.shutdown.connection_wait`](#server-shutdown-connection_wait), [`server.shutdown.query_wait`](#server-shutdown-query_wait) times two, and [`server.shutdown.lease_transfer_wait`](#server-shutdown-lease_transfer_wait).
+`--drain-wait` sets the timeout for [all draining phases](#draining) and is **not** related to the `server.shutdown.initial_wait` cluster setting, which configures the "unready phase" of draining. The value of `--drain-wait` should be greater than the sum of [`server.shutdown.initial_wait`](#server-shutdown-initial_wait), [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout), [`server.shutdown.transactions.timeout`](#server-shutdown-transactions-timeout) times two, and [`server.shutdown.lease_transfer_iteration.timeout`](#server-shutdown-lease_transfer_iteration-timeout). ### Termination grace period @@ -292,25 +311,25 @@ This can lead to disk utilization imbalance across nodes. **This is expected beh In this scenario, each range is replicated 3 times, with each replica on a different node: -
Decommission Scenario 1
+
Decommission Scenario 1
If you try to decommission a node, the process will hang indefinitely because the cluster cannot move the decommissioning node's replicas to the other 2 nodes, which already have a replica of each range: -
Decommission Scenario 1
+
Decommission Scenario 1
To successfully decommission a node in this cluster, you need to **add a 4th node**. The decommissioning process can then complete: -
Decommission Scenario 1
+
Decommission Scenario 1
#### 5-node cluster with 3-way replication In this scenario, like in the scenario above, each range is replicated 3 times, with each replica on a different node: -
Decommission Scenario 1
+
Decommission Scenario 1
If you decommission a node, the process will run successfully because the cluster will be able to move the node's replicas to other nodes without doubling up any range replicas: -
Decommission Scenario 1
+
Decommission Scenario 1
@@ -325,7 +344,7 @@ After [preparing for graceful shutdown](#prepare-for-graceful-shutdown), do the
{{site.data.alerts.callout_success}} -This guidance applies to manual deployments. In a Kubernetes deployment or a CockroachDB {{ site.data.products.dedicated }} cluster, terminating the `cockroach` process is handled through Kubernetes. Refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes) and [Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }}](#decommissioning-and-draining-on-cockroachdb-dedicated). +This guidance applies to manual deployments. In a Kubernetes deployment or a CockroachDB {{ site.data.products.advanced }} cluster, terminating the `cockroach` process is handled through Kubernetes. Refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes) and [Decommissioning and draining on CockroachDB {{ site.data.products.advanced }}](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.end}}
@@ -351,6 +370,10 @@ Do **not** terminate the node process, delete the storage volume, or remove the
### Drain the node and terminate the node process +{% include_cached new-in.html version="v24.2" %}If you passed the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags), the `cockroach` process terminates automatically after draining completes. Otherwise, terminate the `cockroach` process. + +Perform maintenance on the node as required, then restart the `cockroach` process for the node to rejoin the cluster. + {{site.data.alerts.callout_success}} To drain the node without process termination, see [Drain a node manually](#drain-a-node-manually). {{site.data.alerts.end}} @@ -364,7 +387,7 @@ After you initiate a node shutdown or restart, the node's progress is regularly ### `OPS` -During node shutdown, progress messages are generated in the [`OPS` logging channel]({% link {{ page.version.version }}/logging-overview.md %}#logging-channels). The frequency of these messages is configured with [`server.shutdown.lease_transfer_wait`](#server-shutdown-lease_transfer_wait). [By default]({% link {{ page.version.version }}/configure-logs.md %}#default-logging-configuration), the `OPS` logs output to a `cockroach.log` file. +During node shutdown, progress messages are generated in the [`OPS` logging channel]({% link {{ page.version.version }}/logging-overview.md %}#logging-channels). The frequency of these messages is configured with [`server.shutdown.lease_transfer_iteration.timeout`](#server-shutdown-lease_transfer_iteration-timeout). [By default]({% link {{ page.version.version }}/configure-logs.md %}#default-logging-configuration), the `OPS` logs output to a `cockroach.log` file.
Node decommission progress is reported in [`node_decommissioning`]({% link {{ page.version.version }}/eventlog.md %}#node_decommissioning) and [`node_decommissioned`]({% link {{ page.version.version }}/eventlog.md %}#node_decommissioned) events: @@ -540,7 +563,7 @@ To drain and shut down a node that was started in the foreground with [`cockroac You can use [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) to drain a node separately from decommissioning the node or terminating the node process. -1. Run the `cockroach node drain` command, specifying the ID of the node to drain (and optionally a custom [drain timeout](#drain-timeout) to allow draining more time to complete): +1. Run the `cockroach node drain` command, specifying the ID of the node to drain (and optionally a custom [drain timeout](#drain-timeout) to allow draining more time to complete). {% include_cached new-in.html version="v24.2" %}You can optionally pass the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) to automatically terminate the `cockroach` process after draining completes. {% include_cached copy-clipboard.html %} ~~~ shell @@ -603,7 +626,7 @@ This example assumes you will decommission node IDs `4` and `5` of a 5-node clus #### Step 2. Drain the nodes manually -Run the [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) command for each node to be removed, specifying the ID of the node to drain: +Run the [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) command for each node to be removed, specifying the ID of the node to drain. {% include_cached new-in.html version="v24.2" %}Optionally, pass the `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) to automatically terminate the `cockroach` process after draining completes. 
{% include_cached copy-clipboard.html %} ~~~ shell @@ -837,25 +860,25 @@ For clusters deployed using the CockroachDB Helm chart or a manual StatefulSet, Cockroach Labs recommends that you: - Set `terminationGracePeriodSeconds` to no shorter than 300 seconds (5 minutes). This recommendation has been validated over time for many production workloads. In most cases, a value higher than 300 seconds (5 minutes) is not required. If CockroachDB takes longer than 5 minutes to gracefully stop, this may indicate an underlying configuration problem. Test the value you select against representative workloads before rolling out the change to production clusters. -- Set `terminationGracePeriodSeconds` to be at least 5 seconds longer than the configured [drain timeout](#server-shutdown-drain_wait), to allow the node to complete draining before Kubernetes removes the Kubernetes pod for the CockroachDB node. +- Set `terminationGracePeriodSeconds` to be at least 5 seconds longer than the configured [drain timeout](#drain-timeout), to allow the node to complete draining before Kubernetes removes the Kubernetes pod for the CockroachDB node. - Ensure that the **sum** of the following `server.shutdown.*` settings for the CockroachDB cluster do not exceed the deployment's `terminationGracePeriodSeconds`, to reduce the likelihood that a node must be terminated forcibly.
- - [`server.shutdown.drain_wait`](#server-shutdown-drain_wait) - - [`server.shutdown.connection_wait`](#server-shutdown-connection_wait) - - [`server.shutdown.query_wait`](#server-shutdown-query_wait) times two - - [`server.shutdown.lease_transfer_wait`](#server-shutdown-lease_transfer_wait) + - [`server.shutdown.initial_wait`](#server-shutdown-initial_wait) + - [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) + - [`server.shutdown.transactions.timeout`](#server-shutdown-transactions-timeout) times two + - [`server.shutdown.lease_transfer_iteration.timeout`](#server-shutdown-lease_transfer_iteration-timeout) For more information about these settings, refer to [Cluster settings](#cluster-settings). Refer also to the [Kubernetes documentation about pod termination](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination). - A client application's connection pool should have a maximum lifetime that is shorter than the Kubernetes deployment's [`server.shutdown.connection_wait`](#server-shutdown-connection_wait) setting. + A client application's connection pool should have a maximum lifetime that is shorter than the Kubernetes deployment's [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. - + -## Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }} +## Decommissioning and draining on CockroachDB {{ site.data.products.advanced }} -Most of the guidance in this page is most relevant to manual deployments, although decommissioning and draining work the same way behind the scenes in a CockroachDB {{ site.data.products.dedicated }} cluster. CockroachDB {{ site.data.products.dedicated }} clusters have a `server.shutdown.connection_wait` of 1800 seconds (30 minutes) and a termination grace period that is slightly longer. The termination grace period is not configurable, and adjusting `server.shutdown.connection_wait` is generally not recommended. 
+Most of the guidance on this page applies primarily to manual deployments, although decommissioning and draining work the same way behind the scenes in a CockroachDB {{ site.data.products.advanced }} cluster. CockroachDB {{ site.data.products.advanced }} clusters have a `server.shutdown.connections.timeout` of 1800 seconds (30 minutes) and a termination grace period that is slightly longer. The termination grace period is not configurable, and adjusting `server.shutdown.connections.timeout` is generally not recommended. -Client applications or application servers that connect to CockroachDB {{ site.data.products.dedicated }} clusters should use connection pools that have a maximum lifetime that is shorter than the [`server.shutdown.connection_wait`](#server-shutdown-connection_wait) setting. +Client applications or application servers that connect to CockroachDB {{ site.data.products.advanced }} clusters should use connection pools that have a maximum lifetime that is shorter than the [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. ## See also diff --git a/src/current/v23.1/orchestrate-a-local-cluster-with-kubernetes.md b/src/current/v23.1/orchestrate-a-local-cluster-with-kubernetes.md index 4858b597d3e..3420bd25430 100644 --- a/src/current/v23.1/orchestrate-a-local-cluster-with-kubernetes.md +++ b/src/current/v23.1/orchestrate-a-local-cluster-with-kubernetes.md @@ -20,7 +20,6 @@ To orchestrate a physically distributed cluster in production, see [Orchestrated {% include {{ page.version.version }}/orchestration/kubernetes-limitations.md %} - {% include {{ page.version.version }}/orchestration/local-start-kubernetes.md %} ## Step 2.
Start CockroachDB @@ -63,7 +62,7 @@ Choose a way to deploy and maintain the CockroachDB cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube stop + minikube stop ~~~ ~~~ @@ -77,7 +76,7 @@ Choose a way to deploy and maintain the CockroachDB cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube delete + minikube delete ~~~ ~~~ @@ -85,7 +84,9 @@ Choose a way to deploy and maintain the CockroachDB cluster: Machine deleted. ~~~ - {{site.data.alerts.callout_success}}To retain logs, copy them from each pod's stderr before deleting the cluster and all its resources. To access a pod's standard error stream, run kubectl logs <podname>.{{site.data.alerts.end}} + {{site.data.alerts.callout_success}} + To retain logs, copy them from each pod's `stderr` before deleting the cluster and all its resources. To access a pod's standard error stream, run `kubectl logs <podname>`. + {{site.data.alerts.end}} ## See also diff --git a/src/current/v23.1/simulate-a-multi-region-cluster-on-localhost.md b/src/current/v23.1/simulate-a-multi-region-cluster-on-localhost.md index 4a41c8d7442..14836ed353d 100644 --- a/src/current/v23.1/simulate-a-multi-region-cluster-on-localhost.md +++ b/src/current/v23.1/simulate-a-multi-region-cluster-on-localhost.md @@ -5,15 +5,15 @@ toc: true docs_area: deploy --- - Once you've [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}), it's simple to simulate multi-region cluster on your local machine using [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}). This is a useful way to start playing with the [improved multi-region abstractions]({% link {{ page.version.version }}/multiregion-overview.md %}) provided by CockroachDB. 
+ Once you've [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}), you can simulate a multi-region cluster on your local machine using [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) to learn about CockroachDB's [multi-region abstractions]({% link {{ page.version.version }}/multiregion-overview.md %}). {{site.data.alerts.callout_info}} -[`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) is not suitable for production deployments. Additionally, simulating multiple geographically distributed nodes on a single host is not representative of the [performance you should expect]({% link {{ page.version.version }}/frequently-asked-questions.md %}#single-row-perf) of a production deployment. For instructions showing how to do production multi-region deployments, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}) and [Deploy a Global, Serverless Application]({% link {{ page.version.version }}/movr-flask-deployment.md %}). Also be sure to review the [Production Checklist](recommended-production-settings.html). +[`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) is not suitable for production deployments. Additionally, simulating multiple geographically distributed nodes on a single host is not representative of the [performance you should expect]({% link {{ page.version.version }}/frequently-asked-questions.md %}#single-row-perf) in a production deployment.
To learn more about production multi-region deployments, refer to [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}) and [Deploy a Global, Serverless Application]({% link {{ page.version.version }}/movr-flask-deployment.md %}), and review the [Production Checklist](recommended-production-settings.html). {{site.data.alerts.end}} ## Before you begin -- Make sure you have already [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). +[Download]({% link releases/index.md %}) and [Install CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). ## Step 1. Start the cluster diff --git a/src/current/v23.2/deploy-cockroachdb-with-kubernetes.md b/src/current/v23.2/deploy-cockroachdb-with-kubernetes.md index 89b0f7f11d2..71dc9df4ce2 100644 --- a/src/current/v23.2/deploy-cockroachdb-with-kubernetes.md +++ b/src/current/v23.2/deploy-cockroachdb-with-kubernetes.md @@ -23,6 +23,7 @@ This page shows you how to start and stop a secure 3-node CockroachDB cluster in {% include cockroachcloud/use-cockroachcloud-instead.md %} + ## Limitations {% include {{ page.version.version }}/orchestration/kubernetes-limitations.md %} @@ -38,7 +39,7 @@ Choose how you want to deploy and maintain the CockroachDB cluster. {{site.data.alerts.callout_info}} The [CockroachDB Kubernetes Operator](https://github.com/cockroachdb/cockroach-operator) eases CockroachDB cluster creation and management on a single Kubernetes cluster. -Note that the Operator does not provision or apply an Enterprise license key. To use [Enterprise features]({% link {{ page.version.version }}/enterprise-licensing.md %}) with the Operator, [set a license]({% link {{ page.version.version }}/licensing-faqs.md %}#set-a-license) in the SQL shell. +The Operator does not provision or apply an Enterprise license key. 
To use [Enterprise features]({% link {{ page.version.version }}/enterprise-licensing.md %}) with the Operator, [set a license]({% link {{ page.version.version }}/licensing-faqs.md %}#set-a-license) in the SQL shell. {{site.data.alerts.end}}
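Setting a license from the SQL shell might look like the following sketch (the organization name and license key are placeholders; use the values issued to you by Cockroach Labs):

```sql
-- Placeholder values; substitute the organization name and key from your license.
SET CLUSTER SETTING cluster.organization = 'Acme Company';
SET CLUSTER SETTING enterprise.license = 'crl-0-...';
```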
@@ -70,7 +71,7 @@ Note that the Operator does not provision or apply an Enterprise license key. To ## Step 5. Stop the cluster {{site.data.alerts.callout_info}} -If you want to continue using this cluster, see the documentation on [configuring]({% link {{ page.version.version }}/configure-cockroachdb-kubernetes.md %}), [scaling]({% link {{ page.version.version }}/scale-cockroachdb-kubernetes.md %}), [monitoring]({% link {{ page.version.version }}/monitor-cockroachdb-kubernetes.md %}), and [upgrading]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) the cluster. +If you want to continue using this cluster, refer to the documentation on [configuring]({% link {{ page.version.version }}/configure-cockroachdb-kubernetes.md %}), [scaling]({% link {{ page.version.version }}/scale-cockroachdb-kubernetes.md %}), [monitoring]({% link {{ page.version.version }}/monitor-cockroachdb-kubernetes.md %}), and [upgrading]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) the cluster. {{site.data.alerts.end}} {% include {{ page.version.version }}/orchestration/kubernetes-stop-cluster.md %} diff --git a/src/current/v23.2/node-shutdown.md b/src/current/v23.2/node-shutdown.md index 0921931b340..4c549df1edc 100644 --- a/src/current/v23.2/node-shutdown.md +++ b/src/current/v23.2/node-shutdown.md @@ -9,24 +9,24 @@ A node **shutdown** terminates the `cockroach` process on the node. There are two ways to handle node shutdown: -- **Drain a node** to temporarily stop it when you plan to restart it later, such as during cluster maintenance. When you drain a node: +- **Drain a node** to temporarily stop it when you plan to restart it later, such as during cluster maintenance. When you drain a node: - Clients are disconnected, and subsequent connection requests are sent to other nodes.
Otherwise, the node's data is moved to other nodes. + - The node's data store is preserved and will be reused as long as the node restarts in a short time. Otherwise, the node's data is moved to other nodes. - After the node is drained, you can terminate the `cockroach` process, perform maintenance, then restart it. CockroachDB automatically drains a node when [upgrading its cluster version]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). Draining a node is lightweight because it generates little node-to-node traffic across the cluster. + After the node is drained, you can manually terminate the `cockroach` process to perform maintenance, then restart the process for the node to rejoin the cluster. {% include_cached new-in.html version="v24.2" %}The `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) automatically terminates the `cockroach` process after draining completes. A node is also automatically drained when [upgrading its major version]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). Draining a node is lightweight because it generates little node-to-node traffic across the cluster. - **Decommission a node** to permanently remove it from the cluster, such as when scaling down the cluster or to replace the node due to hardware failure. During decommission: - The node is drained automatically if you have not manually drained it. - - The node's data is moved off the node to other nodes. This [replica rebalancing]({% link {{ page.version.version }}/architecture/replication-layer.md %}#membership-changes-rebalance-repair) generates a large amount of node-to-node network traffic, so decommissioning a node is considered a heavyweight operation. + - The node's data is moved off the node to other nodes. 
This [replica rebalancing]({% link {{ page.version.version }}/architecture/glossary.md %}#replica) generates a large amount of node-to-node network traffic, so decommissioning a node is considered a heavyweight operation. This page describes: - The details of the [node shutdown sequence](#node-shutdown-sequence) from the point of view of the `cockroach` process on a CockroachDB node. - How to [prepare for graceful shutdown](#prepare-for-graceful-shutdown) on CockroachDB {{ site.data.products.core }} clusters by coordinating load balancer, client application server, process manager, and cluster settings. - How to [perform node shutdown](#perform-node-shutdown) on CockroachDB {{ site.data.products.core }} deployments by manually draining or decommissioning a node. -- How to handle node shutdown when CockroachDB is deployed using [Kubernetes](#decommissioning-and-draining-on-kubernetes) or in a [CockroachDB {{ site.data.products.dedicated }} cluster](#decommissioning-and-draining-on-cockroachdb-dedicated). +- How to handle node shutdown when CockroachDB is deployed using [Kubernetes](#decommissioning-and-draining-on-kubernetes) or in a [CockroachDB {{ site.data.products.advanced }} cluster](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.callout_success}} -This guidance applies to primarily to manual deployments. For more details about graceful termination when CockroachDB is deployed using Kubernetes, refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes). For more details about graceful termination in a CockroachDB {{ site.data.products.dedicated }} cluster, refer to [Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }}](#decommissioning-and-draining-on-cockroachdb-dedicated). +This guidance applies primarily to manual deployments.
For more details about graceful termination when CockroachDB is deployed using Kubernetes, refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes). For more details about graceful termination in a CockroachDB {{ site.data.products.advanced }} cluster, refer to [Decommissioning and draining on CockroachDB {{ site.data.products.advanced }}](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.end}}
@@ -62,12 +62,16 @@ After this stage, the node is automatically drained. However, to avoid possible
-An operator [initiates the draining process](#drain-the-node-and-terminate-the-node-process) on the node. Draining a node disconnects clients after active queries are completed, and transfers any [range leases]{% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) to other nodes, but does not move replicas or data off of the node. When draining is complete, you can send a `SIGTERM` signal to the `cockroach` process to shut it down, perform the required maintenance, and then restart the `cockroach` process on the node. +An operator [initiates the draining process](#drain-the-node-and-terminate-the-node-process) on the node. Draining a node disconnects clients after active queries are completed, and transfers any [range leases]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) to other nodes, but does not move replicas or data off of the node. -{% capture drain_early_termination_warning %}Do not terminate the `cockroach` process before all of the phases of draining are complete. Otherwise, you may experience latency spikes until the[leases]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) that were on that node have transitioned to other nodes. It is safe to terminate the `cockroach` process only after a node has completed the drain process. This is especially important in a containerized system, to allow all TCP connections to terminate gracefully.{% endcapture %} +When draining is complete, the node must be shut down prior to any maintenance. After a 60-second wait at minimum, you can send a `SIGTERM` signal to the `cockroach` process to shut it down. 
{% include_cached new-in.html version="v24.2" %}The `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) automatically terminates the `cockroach` process after draining completes. + +After you perform the required maintenance, you can restart the `cockroach` process on the node for it to rejoin the cluster. + +{% capture drain_early_termination_warning %}Do not terminate the `cockroach` process before all of the phases of draining are complete. Otherwise, you may experience latency spikes until the [leases]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) that were on that node have transitioned to other nodes. It is safe to terminate the `cockroach` process only after a node has completed the drain process. This is especially important in a containerized system, to allow all TCP connections to terminate gracefully.{% endcapture %} {{site.data.alerts.callout_danger}} -{{ drain_early_termination_warning }} If necessary, adjust the [`server.shutdown.initial_wait`](#server-shutdown-initial_wait) cluster setting and the [termination grace period]({% link {{ page.version.version }}/node-shutdown.md %}?filters=decommission#termination-grace-period) and adjust your process manager or other deployment tooling to allow adequate time for the node to finish draining before it is terminated or restarted. +{{ drain_early_termination_warning }} If necessary, adjust the [`server.shutdown.initial_wait`](#server-shutdown-initial_wait) cluster setting and the [termination grace period]({% link {{ page.version.version }}/node-shutdown.md %}?filters=decommission#termination-grace-period), and adjust your process manager or other deployment tooling to allow adequate time for the node to finish draining before it is terminated or restarted. {{site.data.alerts.end}}
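To budget adequate time for draining, you can sum the drain-related settings, counting `server.shutdown.transactions.timeout` twice per the guidance on this page. A sketch with illustrative values (read your cluster's actual values with `SHOW CLUSTER SETTING`):

```shell
# Illustrative values in seconds; substitute your cluster's actual settings.
initial_wait=0
connections_timeout=0
transactions_timeout=10
lease_transfer_iteration_timeout=5

# transactions.timeout is counted twice in the recommended sum.
budget=$(( initial_wait + connections_timeout + 2 * transactions_timeout + lease_transfer_iteration_timeout ))
echo "allow at least ${budget}s for draining"
```

Any process-manager or deployment-tooling timeout should exceed this total.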
@@ -89,7 +93,7 @@ Node drain consists of the following consecutive phases: 1. **Lease transfer phase:** The node's [`is_draining`]({% link {{ page.version.version }}/cockroach-node.md %}#node-status) field is set to `true`, which removes the node as a candidate for replica rebalancing, lease transfers, and query planning. Any [range leases]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) or [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) must be transferred to other nodes. This phase completes when all range leases and Raft leaderships have been transferred.
- Since all [range replicas]({% link {{ page.version.version }}/architecture/glossary.md %}#replica) were already removed from the node during the [draining](#draining) stage, this step immediately resolves. + Since all range replicas were already removed from the node during the [draining](#draining) stage, this step immediately resolves.
@@ -116,7 +120,10 @@ After draining and decommissioning are complete, an operator [terminates the nod After draining is complete: - If the node was drained automatically because the `cockroach` process received a `SIGTERM` signal, the `cockroach` process is automatically terminated when draining is complete. -- If the node was drained manually because an operator issued a `cockroach node drain` command, the `cockroach` process must be terminated manually. A minimum of 60 seconds after draining is complete, send it a `SIGTERM` signal to terminate it. Refer to [Terminate the node process](#drain-the-node-and-terminate-the-node-process). +- If the node was drained manually because an operator issued a `cockroach node drain` command: + - {% include_cached new-in.html version="v24.2" %}If you pass the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags), the `cockroach` process terminates automatically after draining completes. + - If the node's major version is being updated, the `cockroach` process terminates automatically after draining completes. + - Otherwise, the `cockroach` process must be terminated manually. A minimum of 60 seconds after draining is complete, send it a `SIGTERM` signal to terminate it. Refer to [Terminate the node process](#drain-the-node-and-terminate-the-node-process).
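The terminate-after-drain pattern (signal the process, then wait for a clean exit) can be sketched with a stand-in process in place of `cockroach`; this is illustrative only and does not drain anything:

```shell
# Stand-in for a drained cockroach process: exits cleanly on SIGTERM.
sh -c 'trap "exit 0" TERM; sleep 5 & wait' &
pid=$!
sleep 1

kill -TERM "$pid"       # analogous to terminating cockroach after draining
wait "$pid" && echo "process exited cleanly"
```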
@@ -176,7 +183,7 @@ SET CLUSTER SETTING server.shutdown.initial_wait = '8s'; ~~~ #### `server.shutdown.connections.timeout` - + Alias: `server.shutdown.connection_wait` @@ -209,11 +216,11 @@ If there are still open transactions on the draining node when the server closes {% include {{page.version.version}}/sql/sql-defaults-cluster-settings-deprecation-notice.md %} #### `server.shutdown.lease_transfer_iteration.timeout` - + Alias: `server.shutdown.lease_transfer_wait` -n the ["lease transfer phase"](#draining) of node drain, the server attempts to transfer all [range leases]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) from the draining node. [`server.shutdown.lease_transfer_iteration.timeout`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-server-shutdown-lease-transfer-wait) sets the maximum duration of each iteration of this attempt. Because this phase does not exit until all transfers are completed, changing this value affects only the frequency at which drain progress messages are printed. +In the ["lease transfer phase"](#draining) of node drain, the server attempts to transfer all [range leases]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) from the draining node. `server.shutdown.lease_transfer_iteration.timeout` sets the maximum duration of each iteration of this attempt (`5s` by default). Because this phase does not exit until all transfers are completed, changing this value affects only the frequency at which drain progress messages are printed.
In most cases, the default value is suitable. Do **not** set `server.shutdown.lease_transfer_iteration.timeout` to a value lower than `5s`. In this case, leases can fail to transfer and node drain will not be able to complete. @@ -224,7 +231,7 @@ Since [decommissioning](#decommissioning) a node rebalances all of its range rep
{{site.data.alerts.callout_info}} -The sum of [`server.shutdown.initial_wait`](#server-shutdown-initial_wait), [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout), [`server.shutdown.transactions.timeout`](#server-shutdown-transactions-timeout) times two, and [`server.shutdown.lease_transfer_iteration.timeout`](#server-shutdown-lease_transfer_iteration-timeout) should not be greater than the configured [drain timeout](#drain-timeout). +The sum of [`server.shutdown.initial_wait`](#server-shutdown-initial_wait), [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout), [`server.shutdown.transactions.timeout`](#server-shutdown-transactions-timeout) times two, and [`server.shutdown.lease_transfer_iteration.timeout`](#server-shutdown-lease_transfer_iteration-timeout) should not be greater than the configured [drain timeout](#drain-timeout). {{site.data.alerts.end}} #### `kv.allocator.recovery_store_selector` @@ -266,7 +273,7 @@ A very long drain may indicate an anomaly, and you should manually inspect the s CockroachDB automatically increases the verbosity of logging when it detects a stall in the range lease transfer stage of `node drain`. Messages logged during such a stall include the time an attempt occurred, the total duration stalled waiting for the transfer attempt to complete, and the lease that is being transferred. -`--drain-wait` sets the timeout for [all draining phases](#draining) and is **not** related to the `server.shutdown.initial_wait` cluster setting, which configures the "unready phase" of draining. The value of `--drain-wait` should be greater than the sum of [`server.shutdown.initial_wait`](#server-shutdown-initial_wait), [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout), [`server.shutdown.transactions.timeout`](#server-shutdown-transactions-timeout) times two, and [`server.shutdown.lease_transfer_iteration.timeout`](#server-shutdown-lease_transfer_iteration-timeout).
+`--drain-wait` sets the timeout for [all draining phases](#draining) and is **not** related to the `server.shutdown.initial_wait` cluster setting, which configures the "unready phase" of draining. The value of `--drain-wait` should be greater than the sum of [`server.shutdown.initial_wait`](#server-shutdown-initial_wait), [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout), [`server.shutdown.transactions.timeout`](#server-shutdown-transactions-timeout) times two, and [`server.shutdown.lease_transfer_iteration.timeout`](#server-shutdown-lease_transfer_iteration-timeout). ### Termination grace period @@ -304,25 +311,25 @@ This can lead to disk utilization imbalance across nodes. **This is expected beh In this scenario, each range is replicated 3 times, with each replica on a different node: -
Decommission Scenario 1
+
Decommission Scenario 1
If you try to decommission a node, the process will hang indefinitely because the cluster cannot move the decommissioning node's replicas to the other 2 nodes, which already have a replica of each range: -
Decommission Scenario 1
+
Decommission Scenario 1
To successfully decommission a node in this cluster, you need to **add a 4th node**. The decommissioning process can then complete: -
Decommission Scenario 1
+
Decommission Scenario 1
#### 5-node cluster with 3-way replication In this scenario, like in the scenario above, each range is replicated 3 times, with each replica on a different node: -
Decommission Scenario 1
+
Decommission Scenario 1
If you decommission a node, the process will run successfully because the cluster will be able to move the node's replicas to other nodes without doubling up any range replicas: -
Decommission Scenario 1
+
Decommission Scenario 1
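The replica-placement constraint behind these scenarios reduces to a simple counting argument: with 3-way replication, every replica of a range must live on a distinct node, so decommissioning can only complete if at least as many nodes remain as the replication factor. A minimal sketch of that check (illustrative only, not CockroachDB's allocator logic):

```python
def can_decommission(total_nodes: int, nodes_to_remove: int,
                     replication_factor: int = 3) -> bool:
    """Decommissioning can only complete if enough nodes remain to hold
    one replica of each range on a distinct node."""
    return total_nodes - nodes_to_remove >= replication_factor

# 3-node cluster, 3-way replication: decommissioning hangs until a 4th node joins.
assert not can_decommission(3, 1)
assert can_decommission(4, 1)

# 5-node cluster, 3-way replication: decommissioning one node succeeds.
assert can_decommission(5, 1)
```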
@@ -337,7 +344,7 @@ After [preparing for graceful shutdown](#prepare-for-graceful-shutdown), do the
{{site.data.alerts.callout_success}} -This guidance applies to manual deployments. In a Kubernetes deployment or a CockroachDB {{ site.data.products.dedicated }} cluster, terminating the `cockroach` process is handled through Kubernetes. Refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes) and [Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }}](#decommissioning-and-draining-on-cockroachdb-dedicated). +This guidance applies to manual deployments. In a Kubernetes deployment or a CockroachDB {{ site.data.products.advanced }} cluster, terminating the `cockroach` process is handled through Kubernetes. Refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes) and [Decommissioning and draining on CockroachDB {{ site.data.products.advanced }}](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.end}}
@@ -363,6 +370,10 @@ Do **not** terminate the node process, delete the storage volume, or remove the
### Drain the node and terminate the node process +{% include_cached new-in.html version="v24.2" %}If you passed the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags), the `cockroach` process terminates automatically after draining completes. Otherwise, terminate the `cockroach` process. + +Perform maintenance on the node as required, then restart the `cockroach` process for the node to rejoin the cluster. + {{site.data.alerts.callout_success}} To drain the node without process termination, see [Drain a node manually](#drain-a-node-manually). {{site.data.alerts.end}} @@ -552,7 +563,7 @@ To drain and shut down a node that was started in the foreground with [`cockroac You can use [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) to drain a node separately from decommissioning the node or terminating the node process. -1. Run the `cockroach node drain` command, specifying the ID of the node to drain (and optionally a custom [drain timeout](#drain-timeout) to allow draining more time to complete): +1. Run the `cockroach node drain` command, specifying the ID of the node to drain (and optionally a custom [drain timeout](#drain-timeout) to allow draining more time to complete). {% include_cached new-in.html version="v24.2" %}You can optionally pass the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) to automatically terminate the `cockroach` process after draining completes. {% include_cached copy-clipboard.html %} ~~~ shell @@ -615,7 +626,7 @@ This example assumes you will decommission node IDs `4` and `5` of a 5-node clus #### Step 2. 
Drain the nodes manually -Run the [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) command for each node to be removed, specifying the ID of the node to drain: +Run the [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) command for each node to be removed, specifying the ID of the node to drain. {% include_cached new-in.html version="v24.2" %}Optionally, pass the `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) to automatically terminate the `cockroach` process after draining completes. {% include_cached copy-clipboard.html %} ~~~ shell @@ -849,7 +860,7 @@ For clusters deployed using the CockroachDB Helm chart or a manual StatefulSet, Cockroach Labs recommends that you: - Set `terminationGracePeriodSeconds` to no shorter than 300 seconds (5 minutes). This recommendation has been validated over time for many production workloads. In most cases, a value higher than 300 seconds (5 minutes) is not required. If CockroachDB takes longer than 5 minutes to gracefully stop, this may indicate an underlying configuration problem. Test the value you select against representative workloads before rolling out the change to production clusters. -- Set `terminationGracePeriodSeconds` to be at least 5 seconds longer than the configured [drain timeout](#server-shutdown-initial_wait), to allow the node to complete draining before Kubernetes removes the Kubernetes pod for the CockroachDB node. +- Set `terminationGracePeriodSeconds` to be at least 5 seconds longer than the configured [drain timeout](#drain-timeout), to allow the node to complete draining before Kubernetes removes the Kubernetes pod for the CockroachDB node.
- Ensure that the **sum** of the following `server.shutdown.*` settings for the CockroachDB cluster does not exceed the deployment's `terminationGracePeriodSeconds`, to reduce the likelihood that a node must be terminated forcibly. - [`server.shutdown.initial_wait`](#server-shutdown-initial_wait) @@ -861,13 +872,13 @@ Cockroach Labs recommends that you: A client application's connection pool should have a maximum lifetime that is shorter than the Kubernetes deployment's [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. - + -## Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }} +## Decommissioning and draining on CockroachDB {{ site.data.products.advanced }} -Most of the guidance in this page is most relevant to manual deployments, although decommissioning and draining work the same way behind the scenes in a CockroachDB {{ site.data.products.dedicated }} cluster. CockroachDB {{ site.data.products.dedicated }} clusters have a `server.shutdown.connections.timeout` of 1800 seconds (30 minutes) and a termination grace period that is slightly longer. The termination grace period is not configurable, and adjusting `server.shutdown.connections.timeout` is generally not recommended. +The guidance on this page is most relevant to manual deployments, although decommissioning and draining work the same way behind the scenes in a CockroachDB {{ site.data.products.advanced }} cluster. CockroachDB {{ site.data.products.advanced }} clusters have a `server.shutdown.connections.timeout` of 1800 seconds (30 minutes) and a termination grace period that is slightly longer. The termination grace period is not configurable, and adjusting `server.shutdown.connections.timeout` is generally not recommended.
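The `terminationGracePeriodSeconds` recommendations combine several lower bounds: a 300-second floor, at least 5 seconds beyond the drain timeout, and no less than the sum of the `server.shutdown.*` phase timeouts. A small sketch of the resulting minimum (values are illustrative placeholders, not CockroachDB defaults):

```python
def min_termination_grace_period(drain_timeout: float,
                                 shutdown_settings_sum: float) -> float:
    """Smallest terminationGracePeriodSeconds that satisfies the
    recommendations: at least 300s (5 minutes), at least 5s longer than
    the drain timeout, and no shorter than the sum of the
    server.shutdown.* phase timeouts."""
    return max(300.0, drain_timeout + 5.0, shutdown_settings_sum)

# A 60s drain timeout still calls for the 300s floor.
assert min_termination_grace_period(60, 100) == 300.0
# A 600s drain timeout pushes the minimum to 605s.
assert min_termination_grace_period(600, 100) == 605.0
```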
-Client applications or application servers that connect to CockroachDB {{ site.data.products.dedicated }} clusters should use connection pools that have a maximum lifetime that is shorter than the [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. +Client applications or application servers that connect to CockroachDB {{ site.data.products.advanced }} clusters should use connection pools that have a maximum lifetime that is shorter than the [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. ## See also diff --git a/src/current/v23.2/orchestrate-a-local-cluster-with-kubernetes.md b/src/current/v23.2/orchestrate-a-local-cluster-with-kubernetes.md index 4858b597d3e..3420bd25430 100644 --- a/src/current/v23.2/orchestrate-a-local-cluster-with-kubernetes.md +++ b/src/current/v23.2/orchestrate-a-local-cluster-with-kubernetes.md @@ -20,7 +20,6 @@ To orchestrate a physically distributed cluster in production, see [Orchestrated {% include {{ page.version.version }}/orchestration/kubernetes-limitations.md %} - {% include {{ page.version.version }}/orchestration/local-start-kubernetes.md %} ## Step 2. Start CockroachDB @@ -63,7 +62,7 @@ Choose a way to deploy and maintain the CockroachDB cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube stop + minikube stop ~~~ ~~~ @@ -77,7 +76,7 @@ Choose a way to deploy and maintain the CockroachDB cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube delete + minikube delete ~~~ ~~~ @@ -85,7 +84,9 @@ Choose a way to deploy and maintain the CockroachDB cluster: Machine deleted. ~~~ - {{site.data.alerts.callout_success}}To retain logs, copy them from each pod's stderr before deleting the cluster and all its resources. 
To access a pod's standard error stream, run kubectl logs <podname>.{{site.data.alerts.end}} + {{site.data.alerts.callout_success}} + To retain logs, copy them from each pod's `stderr` before deleting the cluster and all its resources. To access a pod's standard error stream, run `kubectl logs <podname>`. + {{site.data.alerts.end}} ## See also diff --git a/src/current/v23.2/simulate-a-multi-region-cluster-on-localhost.md b/src/current/v23.2/simulate-a-multi-region-cluster-on-localhost.md index 4a41c8d7442..14836ed353d 100644 --- a/src/current/v23.2/simulate-a-multi-region-cluster-on-localhost.md +++ b/src/current/v23.2/simulate-a-multi-region-cluster-on-localhost.md @@ -5,15 +5,15 @@ toc: true docs_area: deploy --- - Once you've [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}), it's simple to simulate multi-region cluster on your local machine using [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}). This is a useful way to start playing with the [improved multi-region abstractions]({% link {{ page.version.version }}/multiregion-overview.md %}) provided by CockroachDB. + Once you've [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}), you can simulate a multi-region cluster on your local machine using [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) to learn about CockroachDB's [multi-region abstractions]({% link {{ page.version.version }}/multiregion-overview.md %}).
For instructions showing how to do production multi-region deployments, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}) and [Deploy a Global, Serverless Application]({% link {{ page.version.version }}/movr-flask-deployment.md %}). Also be sure to review the [Production Checklist](recommended-production-settings.html). +[`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) is not suitable for production deployments. Additionally, simulating multiple geographically distributed nodes on a single host is not representative of the [performance you should expect]({% link {{ page.version.version }}/frequently-asked-questions.md %}#single-row-perf) in a production deployment. To learn more about production multi-region deployments, refer to [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}) and [Deploy a Global, Serverless Application]({% link {{ page.version.version }}/movr-flask-deployment.md %}), and review the [Production Checklist](recommended-production-settings.html). {{site.data.alerts.end}} ## Before you begin -- Make sure you have already [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). +[Download]({% link releases/index.md %}) and [Install CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). ## Step 1. Start the cluster diff --git a/src/current/v24.1/deploy-cockroachdb-with-kubernetes.md b/src/current/v24.1/deploy-cockroachdb-with-kubernetes.md index a0f697071b2..57d63b734b2 100644 --- a/src/current/v24.1/deploy-cockroachdb-with-kubernetes.md +++ b/src/current/v24.1/deploy-cockroachdb-with-kubernetes.md @@ -38,7 +38,7 @@ Choose how you want to deploy and maintain the CockroachDB cluster. 
{{site.data.alerts.callout_info}} The [CockroachDB Kubernetes Operator](https://github.com/cockroachdb/cockroach-operator) eases CockroachDB cluster creation and management on a single Kubernetes cluster. -Note that the Operator does not provision or apply an Enterprise license key. To use [Enterprise features]({% link {{ page.version.version }}/enterprise-licensing.md %}) with the Operator, [set a license]({% link {{ page.version.version }}/licensing-faqs.md %}#set-a-license) in the SQL shell. +The Operator does not provision or apply an Enterprise license key. To use [Enterprise features]({% link {{ page.version.version }}/enterprise-licensing.md %}) with the Operator, [set a license]({% link {{ page.version.version }}/licensing-faqs.md %}#set-a-license) in the SQL shell. {{site.data.alerts.end}}
@@ -70,7 +70,7 @@ Note that the Operator does not provision or apply an Enterprise license key. To ## Step 5. Stop the cluster {{site.data.alerts.callout_info}} -If you want to continue using this cluster, see the documentation on [configuring]({% link {{ page.version.version }}/configure-cockroachdb-kubernetes.md %}), [scaling]({% link {{ page.version.version }}/scale-cockroachdb-kubernetes.md %}), [monitoring]({% link {{ page.version.version }}/monitor-cockroachdb-kubernetes.md %}), and [upgrading]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) the cluster. +If you want to continue using this cluster, refer to the documentation on [configuring]({% link {{ page.version.version }}/configure-cockroachdb-kubernetes.md %}), [scaling]({% link {{ page.version.version }}/scale-cockroachdb-kubernetes.md %}), [monitoring]({% link {{ page.version.version }}/monitor-cockroachdb-kubernetes.md %}), and [upgrading]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) the cluster. {{site.data.alerts.end}} {% include {{ page.version.version }}/orchestration/kubernetes-stop-cluster.md %} diff --git a/src/current/v24.1/node-shutdown.md b/src/current/v24.1/node-shutdown.md index 659a6067bf1..4c549df1edc 100644 --- a/src/current/v24.1/node-shutdown.md +++ b/src/current/v24.1/node-shutdown.md @@ -13,7 +13,7 @@ There are two ways to handle node shutdown: - Clients are disconnected, and subsequent connection requests are sent to other nodes. - The node's data store is preserved and will be reused as long as the node restarts in a short time. Otherwise, the node's data is moved to other nodes. - After the node is drained, you can terminate the `cockroach` process, perform maintenance, then restart it. CockroachDB automatically drains a node when [upgrading its cluster version]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). Draining a node is lightweight because it generates little node-to-node traffic across the cluster.
+ After the node is drained, you can manually terminate the `cockroach` process to perform maintenance, then restart the process for the node to rejoin the cluster. {% include_cached new-in.html version="v24.2" %}The `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) automatically terminates the `cockroach` process after draining completes. A node is also automatically drained when [upgrading its major version]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). Draining a node is lightweight because it generates little node-to-node traffic across the cluster. - **Decommission a node** to permanently remove it from the cluster, such as when scaling down the cluster or to replace the node due to hardware failure. During decommission: - The node is drained automatically if you have not manually drained it. - The node's data is moved off the node to other nodes. This [replica rebalancing]({% link {{ page.version.version }}/architecture/glossary.md %}#replica) generates a large amount of node-to-node network traffic, so decommissioning a node is considered a heavyweight operation. @@ -23,10 +23,10 @@ This page describes: - The details of the [node shutdown sequence](#node-shutdown-sequence) from the point of view of the `cockroach` process on a CockroachDB node. - How to [prepare for graceful shutdown](#prepare-for-graceful-shutdown) on CockroachDB {{ site.data.products.core }} clusters by coordinating load balancer, client application server, process manager, and cluster settings. - How to [perform node shutdown](#perform-node-shutdown) on CockroachDB {{ site.data.products.core }} deployments by manually draining or decommissioning a node. -- How to handle node shutdown when CockroachDB is deployed using [Kubernetes](#decommissioning-and-draining-on-kubernetes) or in a [CockroachDB {{ site.data.products.dedicated }} cluster](#decommissioning-and-draining-on-cockroachdb-dedicated). 
+- How to handle node shutdown when CockroachDB is deployed using [Kubernetes](#decommissioning-and-draining-on-kubernetes) or in a [CockroachDB {{ site.data.products.advanced }} cluster](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.callout_success}} -This guidance applies to primarily to manual deployments. For more details about graceful termination when CockroachDB is deployed using Kubernetes, refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes). For more details about graceful termination in a CockroachDB {{ site.data.products.dedicated }} cluster, refer to [Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }}](#decommissioning-and-draining-on-cockroachdb-dedicated). +This guidance applies primarily to manual deployments. For more details about graceful termination when CockroachDB is deployed using Kubernetes, refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes). For more details about graceful termination in a CockroachDB {{ site.data.products.advanced }} cluster, refer to [Decommissioning and draining on CockroachDB {{ site.data.products.advanced }}](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.end}}
@@ -62,7 +62,11 @@ After this stage, the node is automatically drained. However, to avoid possible
-An operator [initiates the draining process](#drain-the-node-and-terminate-the-node-process) on the node. Draining a node disconnects clients after active queries are completed, and transfers any [range leases]{% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) to other nodes, but does not move replicas or data off of the node. When draining is complete, you can send a `SIGTERM` signal to the `cockroach` process to shut it down, perform the required maintenance, and then restart the `cockroach` process on the node. +An operator [initiates the draining process](#drain-the-node-and-terminate-the-node-process) on the node. Draining a node disconnects clients after active queries are completed, and transfers any [range leases]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) to other nodes, but does not move replicas or data off of the node. + +When draining is complete, the node must be shut down prior to any maintenance. After a 60-second wait at minimum, you can send a `SIGTERM` signal to the `cockroach` process to shut it down. {% include_cached new-in.html version="v24.2" %}The `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) automatically terminates the `cockroach` process after draining completes. + +After you perform the required maintenance, you can restart the `cockroach` process on the node for it to rejoin the cluster. {% capture drain_early_termination_warning %}Do not terminate the `cockroach` process before all of the phases of draining are complete. 
Otherwise, you may experience latency spikes until the [leases]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) that were on that node have transitioned to other nodes. It is safe to terminate the `cockroach` process only after a node has completed the drain process. This is especially important in a containerized system, to allow all TCP connections to terminate gracefully.{% endcapture %} @@ -116,7 +120,10 @@ After draining and decommissioning are complete, an operator [terminates the nod After draining is complete: - If the node was drained automatically because the `cockroach` process received a `SIGTERM` signal, the `cockroach` process is automatically terminated when draining is complete. -- If the node was drained manually because an operator issued a `cockroach node drain` command, the `cockroach` process must be terminated manually. A minimum of 60 seconds after draining is complete, send it a `SIGTERM` signal to terminate it. Refer to [Terminate the node process](#drain-the-node-and-terminate-the-node-process). +- If the node was drained manually because an operator issued a `cockroach node drain` command: + - {% include_cached new-in.html version="v24.2" %}If you pass the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags), the `cockroach` process terminates automatically after draining completes. + - If the node's major version is being updated, the `cockroach` process terminates automatically after draining completes. + - Otherwise, the `cockroach` process must be terminated manually. A minimum of 60 seconds after draining is complete, send it a `SIGTERM` signal to terminate it. Refer to [Terminate the node process](#drain-the-node-and-terminate-the-node-process).
@@ -337,7 +344,7 @@ After [preparing for graceful shutdown](#prepare-for-graceful-shutdown), do the
{{site.data.alerts.callout_success}} -This guidance applies to manual deployments. In a Kubernetes deployment or a CockroachDB {{ site.data.products.dedicated }} cluster, terminating the `cockroach` process is handled through Kubernetes. Refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes) and [Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }}](#decommissioning-and-draining-on-cockroachdb-dedicated). +This guidance applies to manual deployments. In a Kubernetes deployment or a CockroachDB {{ site.data.products.advanced }} cluster, terminating the `cockroach` process is handled through Kubernetes. Refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes) and [Decommissioning and draining on CockroachDB {{ site.data.products.advanced }}](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.end}}
@@ -363,6 +370,10 @@ Do **not** terminate the node process, delete the storage volume, or remove the
### Drain the node and terminate the node process +{% include_cached new-in.html version="v24.2" %}If you passed the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags), the `cockroach` process terminates automatically after draining completes. Otherwise, terminate the `cockroach` process. + +Perform maintenance on the node as required, then restart the `cockroach` process for the node to rejoin the cluster. + {{site.data.alerts.callout_success}} To drain the node without process termination, see [Drain a node manually](#drain-a-node-manually). {{site.data.alerts.end}} @@ -552,7 +563,7 @@ To drain and shut down a node that was started in the foreground with [`cockroac You can use [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) to drain a node separately from decommissioning the node or terminating the node process. -1. Run the `cockroach node drain` command, specifying the ID of the node to drain (and optionally a custom [drain timeout](#drain-timeout) to allow draining more time to complete): +1. Run the `cockroach node drain` command, specifying the ID of the node to drain (and optionally a custom [drain timeout](#drain-timeout) to allow draining more time to complete). {% include_cached new-in.html version="v24.2" %}You can optionally pass the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) to automatically terminate the `cockroach` process after draining completes. {% include_cached copy-clipboard.html %} ~~~ shell @@ -615,7 +626,7 @@ This example assumes you will decommission node IDs `4` and `5` of a 5-node clus #### Step 2. 
Drain the nodes manually -Run the [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) command for each node to be removed, specifying the ID of the node to drain: +Run the [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) command for each node to be removed, specifying the ID of the node to drain. {% include_cached new-in.html version="v24.2" %}Optionally, pass the `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) to automatically terminate the `cockroach` process after draining completes. {% include_cached copy-clipboard.html %} ~~~ shell @@ -861,13 +872,13 @@ Cockroach Labs recommends that you: A client application's connection pool should have a maximum lifetime that is shorter than the Kubernetes deployment's [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. - + -## Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }} +## Decommissioning and draining on CockroachDB {{ site.data.products.advanced }} -Most of the guidance in this page is most relevant to manual deployments, although decommissioning and draining work the same way behind the scenes in a CockroachDB {{ site.data.products.dedicated }} cluster. CockroachDB {{ site.data.products.dedicated }} clusters have a `server.shutdown.connections.timeout` of 1800 seconds (30 minutes) and a termination grace period that is slightly longer. The termination grace period is not configurable, and adjusting `server.shutdown.connections.timeout` is generally not recommended. +The guidance on this page is most relevant to manual deployments, although decommissioning and draining work the same way behind the scenes in a CockroachDB {{ site.data.products.advanced }} cluster.
CockroachDB {{ site.data.products.advanced }} clusters have a `server.shutdown.connections.timeout` of 1800 seconds (30 minutes) and a termination grace period that is slightly longer. The termination grace period is not configurable, and adjusting `server.shutdown.connections.timeout` is generally not recommended. -Client applications or application servers that connect to CockroachDB {{ site.data.products.dedicated }} clusters should use connection pools that have a maximum lifetime that is shorter than the [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. +Client applications or application servers that connect to CockroachDB {{ site.data.products.advanced }} clusters should use connection pools that have a maximum lifetime that is shorter than the [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. ## See also diff --git a/src/current/v24.1/orchestrate-a-local-cluster-with-kubernetes.md b/src/current/v24.1/orchestrate-a-local-cluster-with-kubernetes.md index 7c2b9e6b162..45bd838e61f 100644 --- a/src/current/v24.1/orchestrate-a-local-cluster-with-kubernetes.md +++ b/src/current/v24.1/orchestrate-a-local-cluster-with-kubernetes.md @@ -16,7 +16,8 @@ This page demonstrates a basic integration with the open-source [Kubernetes](htt To orchestrate a physically distributed cluster in production, see [Orchestrated Deployments]({% link {{ page.version.version }}/kubernetes-overview.md %}). To deploy a 30-day free CockroachDB {{ site.data.products.dedicated }} cluster instead of running CockroachDB yourself, see the [Quickstart]({% link cockroachcloud/quickstart.md %}). 
{{site.data.alerts.end}} -## Best practices + +## Limitations {% include {{ page.version.version }}/orchestration/kubernetes-limitations.md %} @@ -62,7 +63,7 @@ Choose a way to deploy and maintain the CockroachDB cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube stop + minikube stop ~~~ ~~~ @@ -76,7 +77,7 @@ Choose a way to deploy and maintain the CockroachDB cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube delete + minikube delete ~~~ ~~~ @@ -84,7 +85,9 @@ Choose a way to deploy and maintain the CockroachDB cluster: Machine deleted. ~~~ - {{site.data.alerts.callout_success}}To retain logs, copy them from each pod's stderr before deleting the cluster and all its resources. To access a pod's standard error stream, run kubectl logs <podname>.{{site.data.alerts.end}} + {{site.data.alerts.callout_success}} + To retain logs, copy them from each pod's `stderr` before deleting the cluster and all its resources. To access a pod's standard error stream, run `kubectl logs <podname>`. + {{site.data.alerts.end}} ## See also diff --git a/src/current/v24.1/simulate-a-multi-region-cluster-on-localhost.md b/src/current/v24.1/simulate-a-multi-region-cluster-on-localhost.md index 4a41c8d7442..14836ed353d 100644 --- a/src/current/v24.1/simulate-a-multi-region-cluster-on-localhost.md +++ b/src/current/v24.1/simulate-a-multi-region-cluster-on-localhost.md @@ -5,15 +5,15 @@ toc: true docs_area: deploy --- - Once you've [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}), it's simple to simulate multi-region cluster on your local machine using [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}). This is a useful way to start playing with the [improved multi-region abstractions]({% link {{ page.version.version }}/multiregion-overview.md %}) provided by CockroachDB. 
+ Once you've [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}), you can simulate a multi-region cluster on your local machine using [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) to learn about CockroachDB's [multi-region abstractions]({% link {{ page.version.version }}/multiregion-overview.md %}). {{site.data.alerts.callout_info}} -[`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) is not suitable for production deployments. Additionally, simulating multiple geographically distributed nodes on a single host is not representative of the [performance you should expect]({% link {{ page.version.version }}/frequently-asked-questions.md %}#single-row-perf) of a production deployment. For instructions showing how to do production multi-region deployments, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}) and [Deploy a Global, Serverless Application]({% link {{ page.version.version }}/movr-flask-deployment.md %}). Also be sure to review the [Production Checklist](recommended-production-settings.html). +[`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) is not suitable for production deployments. Additionally, simulating multiple geographically distributed nodes on a single host is not representative of the [performance you should expect]({% link {{ page.version.version }}/frequently-asked-questions.md %}#single-row-perf) in a production deployment.
To learn more about production multi-region deployments, refer to [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}) and [Deploy a Global, Serverless Application]({% link {{ page.version.version }}/movr-flask-deployment.md %}), and review the [Production Checklist](recommended-production-settings.html). {{site.data.alerts.end}} ## Before you begin -- Make sure you have already [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). +[Download]({% link releases/index.md %}) and [install CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). ## Step 1. Start the cluster diff --git a/src/current/v24.2/deploy-cockroachdb-with-kubernetes.md b/src/current/v24.2/deploy-cockroachdb-with-kubernetes.md index a0f697071b2..57d63b734b2 100644 --- a/src/current/v24.2/deploy-cockroachdb-with-kubernetes.md +++ b/src/current/v24.2/deploy-cockroachdb-with-kubernetes.md @@ -38,7 +38,7 @@ Choose how you want to deploy and maintain the CockroachDB cluster. {{site.data.alerts.callout_info}} The [CockroachDB Kubernetes Operator](https://github.com/cockroachdb/cockroach-operator) eases CockroachDB cluster creation and management on a single Kubernetes cluster. -Note that the Operator does not provision or apply an Enterprise license key. To use [Enterprise features]({% link {{ page.version.version }}/enterprise-licensing.md %}) with the Operator, [set a license]({% link {{ page.version.version }}/licensing-faqs.md %}#set-a-license) in the SQL shell. +The Operator does not provision or apply an Enterprise license key. To use [Enterprise features]({% link {{ page.version.version }}/enterprise-licensing.md %}) with the Operator, [set a license]({% link {{ page.version.version }}/licensing-faqs.md %}#set-a-license) in the SQL shell. {{site.data.alerts.end}}
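The callout above directs users to set a license in the SQL shell. A minimal sketch of what that could look like with `cockroach sql`, where the host, certificate directory, organization name, and license string are all placeholders (not values from this patch):

```shell
# Sketch: apply an Enterprise license from the SQL shell.
# Host, --certs-dir, organization, and license string are placeholders.
cockroach sql --certs-dir=certs --host=localhost:26257 \
  -e "SET CLUSTER SETTING cluster.organization = 'Example Org';" \
  -e "SET CLUSTER SETTING enterprise.license = 'crl-0-...';"
```

This requires a running cluster and a valid license key, so it is illustrative only.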
@@ -70,7 +70,7 @@ Note that the Operator does not provision or apply an Enterprise license key. To ## Step 5. Stop the cluster {{site.data.alerts.callout_info}} -If you want to continue using this cluster, see the documentation on [configuring]({% link {{ page.version.version }}/configure-cockroachdb-kubernetes.md %}), [scaling]({% link {{ page.version.version }}/scale-cockroachdb-kubernetes.md %}), [monitoring]({% link {{ page.version.version }}/monitor-cockroachdb-kubernetes.md %}), and [upgrading]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) the cluster. +If you want to continue using this cluster, refer to the documentation on [configuring]({% link {{ page.version.version }}/configure-cockroachdb-kubernetes.md %}), [scaling]({% link {{ page.version.version }}/scale-cockroachdb-kubernetes.md %}), [monitoring]({% link {{ page.version.version }}/monitor-cockroachdb-kubernetes.md %}), and [upgrading]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) the cluster. {{site.data.alerts.end}} {% include {{ page.version.version }}/orchestration/kubernetes-stop-cluster.md %} diff --git a/src/current/v24.2/node-shutdown.md b/src/current/v24.2/node-shutdown.md index 479b8b6bebf..4c549df1edc 100644 --- a/src/current/v24.2/node-shutdown.md +++ b/src/current/v24.2/node-shutdown.md @@ -23,10 +23,10 @@ This page describes: - The details of the [node shutdown sequence](#node-shutdown-sequence) from the point of view of the `cockroach` process on a CockroachDB node. - How to [prepare for graceful shutdown](#prepare-for-graceful-shutdown) on CockroachDB {{ site.data.products.core }} clusters by coordinating load balancer, client application server, process manager, and cluster settings. - How to [perform node shutdown](#perform-node-shutdown) on CockroachDB {{ site.data.products.core }} deployments by manually draining or decommissioning a node.
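The manual drain and decommission operations described in the hunk above can be sketched from the command line. The node ID, host, and certificate directory below are illustrative, not taken from this patch:

```shell
# Sketch: gracefully drain node 4, then decommission it.
# Node ID, --host, and --certs-dir values are illustrative.
cockroach node drain 4 --host=localhost:26257 --certs-dir=certs
cockroach node decommission 4 --host=localhost:26257 --certs-dir=certs

# Check decommissioning status before terminating the process.
cockroach node status --decommission --host=localhost:26257 --certs-dir=certs
```

These commands require a running secure cluster, so they are shown as a sketch only.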
-- How to handle node shutdown when CockroachDB is deployed using [Kubernetes](#decommissioning-and-draining-on-kubernetes) or in a [CockroachDB {{ site.data.products.dedicated }} cluster](#decommissioning-and-draining-on-cockroachdb-dedicated). +- How to handle node shutdown when CockroachDB is deployed using [Kubernetes](#decommissioning-and-draining-on-kubernetes) or in a [CockroachDB {{ site.data.products.advanced }} cluster](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.callout_success}} -This guidance applies to primarily to manual deployments. For more details about graceful termination when CockroachDB is deployed using Kubernetes, refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes). For more details about graceful termination in a CockroachDB {{ site.data.products.dedicated }} cluster, refer to [Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }}](#decommissioning-and-draining-on-cockroachdb-dedicated). +This guidance applies primarily to manual deployments. For more details about graceful termination when CockroachDB is deployed using Kubernetes, refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes). For more details about graceful termination in a CockroachDB {{ site.data.products.advanced }} cluster, refer to [Decommissioning and draining on CockroachDB {{ site.data.products.advanced }}](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.end}}
@@ -344,7 +344,7 @@ After [preparing for graceful shutdown](#prepare-for-graceful-shutdown), do the
{{site.data.alerts.callout_success}} -This guidance applies to manual deployments. In a Kubernetes deployment or a CockroachDB {{ site.data.products.dedicated }} cluster, terminating the `cockroach` process is handled through Kubernetes. Refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes) and [Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }}](#decommissioning-and-draining-on-cockroachdb-dedicated). +This guidance applies to manual deployments. In a Kubernetes deployment or a CockroachDB {{ site.data.products.advanced }} cluster, terminating the `cockroach` process is handled through Kubernetes. Refer to [Decommissioning and draining on Kubernetes](#decommissioning-and-draining-on-kubernetes) and [Decommissioning and draining on CockroachDB {{ site.data.products.advanced }}](#decommissioning-and-draining-on-cockroachdb-advanced). {{site.data.alerts.end}}
@@ -872,13 +872,13 @@ Cockroach Labs recommends that you: A client application's connection pool should have a maximum lifetime that is shorter than the Kubernetes deployment's [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. - + -## Decommissioning and draining on CockroachDB {{ site.data.products.dedicated }} +## Decommissioning and draining on CockroachDB {{ site.data.products.advanced }} -Most of the guidance in this page is most relevant to manual deployments, although decommissioning and draining work the same way behind the scenes in a CockroachDB {{ site.data.products.dedicated }} cluster. CockroachDB {{ site.data.products.dedicated }} clusters have a `server.shutdown.connections.timeout` of 1800 seconds (30 minutes) and a termination grace period that is slightly longer. The termination grace period is not configurable, and adjusting `server.shutdown.connections.timeout` is generally not recommended. +Most of the guidance on this page applies to manual deployments, although decommissioning and draining work the same way behind the scenes in a CockroachDB {{ site.data.products.advanced }} cluster. CockroachDB {{ site.data.products.advanced }} clusters have a `server.shutdown.connections.timeout` of 1800 seconds (30 minutes) and a termination grace period that is slightly longer. The termination grace period is not configurable, and adjusting `server.shutdown.connections.timeout` is generally not recommended. -Client applications or application servers that connect to CockroachDB {{ site.data.products.dedicated }} clusters should use connection pools that have a maximum lifetime that is shorter than the [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting.
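As a rough illustration of the connection-pool guidance in this hunk: given the documented 1800-second `server.shutdown.connections.timeout`, a client could derive a pool max-lifetime below it. The 80% safety margin here is an arbitrary assumption for the sketch, not a documented recommendation:

```shell
# Sketch: pick a pool max-lifetime safely below the server's shutdown
# connections timeout (1800s on CockroachDB Advanced clusters).
# The 80% margin is an arbitrary choice, not a documented value.
SHUTDOWN_TIMEOUT_SECS=1800
POOL_MAX_LIFETIME_SECS=$(( SHUTDOWN_TIMEOUT_SECS * 4 / 5 ))
echo "pool max lifetime: ${POOL_MAX_LIFETIME_SECS}s"
```

The resulting value (here, 1440 seconds) would then be set as the max connection lifetime in the application's pool configuration.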
+Client applications or application servers that connect to CockroachDB {{ site.data.products.advanced }} clusters should use connection pools that have a maximum lifetime that is shorter than the [`server.shutdown.connections.timeout`](#server-shutdown-connections-timeout) setting. ## See also diff --git a/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes.md b/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes.md index 7c2b9e6b162..45bd838e61f 100644 --- a/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes.md +++ b/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes.md @@ -16,7 +16,8 @@ This page demonstrates a basic integration with the open-source [Kubernetes](htt To orchestrate a physically distributed cluster in production, see [Orchestrated Deployments]({% link {{ page.version.version }}/kubernetes-overview.md %}). To deploy a 30-day free CockroachDB {{ site.data.products.dedicated }} cluster instead of running CockroachDB yourself, see the [Quickstart]({% link cockroachcloud/quickstart.md %}). {{site.data.alerts.end}} -## Best practices + +## Limitations {% include {{ page.version.version }}/orchestration/kubernetes-limitations.md %} @@ -62,7 +63,7 @@ Choose a way to deploy and maintain the CockroachDB cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube stop + minikube stop ~~~ ~~~ @@ -76,7 +77,7 @@ Choose a way to deploy and maintain the CockroachDB cluster: {% include_cached copy-clipboard.html %} ~~~ shell - $ minikube delete + minikube delete ~~~ ~~~ @@ -84,7 +85,9 @@ Choose a way to deploy and maintain the CockroachDB cluster: Machine deleted. ~~~ - {{site.data.alerts.callout_success}}To retain logs, copy them from each pod's stderr before deleting the cluster and all its resources. 
To access a pod's standard error stream, run kubectl logs <podname>.{{site.data.alerts.end}} + {{site.data.alerts.callout_success}} + To retain logs, copy them from each pod's `stderr` before deleting the cluster and all its resources. To access a pod's standard error stream, run `kubectl logs <podname>`. + {{site.data.alerts.end}} ## See also diff --git a/src/current/v24.2/simulate-a-multi-region-cluster-on-localhost.md b/src/current/v24.2/simulate-a-multi-region-cluster-on-localhost.md index 4a41c8d7442..14836ed353d 100644 --- a/src/current/v24.2/simulate-a-multi-region-cluster-on-localhost.md +++ b/src/current/v24.2/simulate-a-multi-region-cluster-on-localhost.md @@ -5,15 +5,15 @@ toc: true docs_area: deploy --- - Once you've [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}), it's simple to simulate multi-region cluster on your local machine using [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}). This is a useful way to start playing with the [improved multi-region abstractions]({% link {{ page.version.version }}/multiregion-overview.md %}) provided by CockroachDB. + Once you've [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}), you can simulate a multi-region cluster on your local machine using [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) to learn about CockroachDB's [multi-region abstractions]({% link {{ page.version.version }}/multiregion-overview.md %}). {{site.data.alerts.callout_info}} -[`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) is not suitable for production deployments. Additionally, simulating multiple geographically distributed nodes on a single host is not representative of the [performance you should expect]({% link {{ page.version.version }}/frequently-asked-questions.md %}#single-row-perf) of a production deployment.
For instructions showing how to do production multi-region deployments, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}) and [Deploy a Global, Serverless Application]({% link {{ page.version.version }}/movr-flask-deployment.md %}). Also be sure to review the [Production Checklist](recommended-production-settings.html). +[`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}) is not suitable for production deployments. Additionally, simulating multiple geographically distributed nodes on a single host is not representative of the [performance you should expect]({% link {{ page.version.version }}/frequently-asked-questions.md %}#single-row-perf) in a production deployment. To learn more about production multi-region deployments, refer to [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}) and [Deploy a Global, Serverless Application]({% link {{ page.version.version }}/movr-flask-deployment.md %}), and review the [Production Checklist](recommended-production-settings.html). {{site.data.alerts.end}} ## Before you begin -- Make sure you have already [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). +[Download]({% link releases/index.md %}) and [install CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). ## Step 1. Start the cluster
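The "Step 1. Start the cluster" section that follows in the page relies on `cockroach demo`; a minimal sketch of starting a simulated multi-region cluster, assuming the `cockroach` binary is installed locally:

```shell
# Sketch: start a 9-node in-memory demo cluster without the example
# database; with multiple nodes, `cockroach demo` assigns simulated
# localities that span several regions.
cockroach demo --nodes 9 --no-example-database
```

From the demo SQL shell, `SHOW REGIONS FROM CLUSTER;` lists the simulated regions available to the cluster.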