diff --git a/docs/content/preview/reference/configuration/yugabyted.md b/docs/content/preview/reference/configuration/yugabyted.md index 5077857defa8..029077be9f32 100644 --- a/docs/content/preview/reference/configuration/yugabyted.md +++ b/docs/content/preview/reference/configuration/yugabyted.md @@ -19,12 +19,12 @@ YugabyteDB uses a two-server architecture, with [YB-TServers](../yb-tserver/) ma {{< youtube id="ah_fPDpZjnc" title="How to Start YugabyteDB on Your Laptop" >}} -The yugabyted executable file is located in the YugabyteDB home's `bin` directory. +The `yugabyted` executable file is located in the YugabyteDB home's `bin` directory. For examples of using yugabyted to deploy single- and multi-node clusters, see [Examples](#examples). {{}} -You can use yugabyted for production deployments (v2.18.4 and later). You can also administer [yb-tserver](../yb-tserver/) and [yb-master](../yb-master/) directly (refer to [Deploy YugabyteDB](../../../deploy/)). +You can use yugabyted for production deployments (v2.18.4 and later). You can also administer [`yb-tserver`](../yb-tserver/) and [`yb-master`](../yb-master/) directly (refer to [Deploy YugabyteDB](../../../deploy/)). {{}} {{% note title="Running on macOS" %}} @@ -50,7 +50,7 @@ $ ./bin/yugabyted start ### Online help -You can access command-line help for yugabyted by running one of the following examples from the YugabyteDB home: +You can access command-line help for `yugabyted` by running one of the following examples from the YugabyteDB home: ```sh $ ./bin/yugabyted -h @@ -60,7 +60,7 @@ $ ./bin/yugabyted -h $ ./bin/yugabyted -help ``` -For help with specific yugabyted commands, run 'yugabyted [ command ] -h'. For example, you can print the command-line help for the `yugabyted start` command by running the following: +For help with specific `yugabyted` commands, run 'yugabyted [ command ] -h'. 
For example, you can print the command-line help for the `yugabyted start` command by running the following: ```sh $ ./bin/yugabyted start -h @@ -242,7 +242,7 @@ The following sub-commands are available for `yugabyted configure` command: #### data_placement -{{}} Use the `yugabyted configure data_placement` sub-command to set or modify placement policy of the nodes of the deployed cluster, and specify the [preferred region(s)](../../../architecture/key-concepts/#preferred-region). +Use the `yugabyted configure data_placement` sub-command to set or modify placement policy of the nodes of the deployed cluster, and specify the [preferred region(s)](../../../architecture/key-concepts/#preferred-region). For example, you would use the following command to create a multi-zone YugabyteDB cluster: @@ -714,12 +714,6 @@ Create a single-node locally and join other nodes that are part of the same clus --base_dir *base-directory* : The directory where yugabyted stores data, configurations, and logs. Must be an absolute path. ---data_dir *data-directory* -: The directory where yugabyted stores data. Must be an absolute path. Can be configured to a directory different from the one where configurations and logs are stored. - ---log_dir *log-directory* -: The directory to store yugabyted logs. Must be an absolute path. This flag controls where the logs of the YugabyteDB nodes are stored. By default, logs are written to `~/var/logs`. - --background *bool* : Enable or disable running yugabyted in the background as a daemon. Does not persist on restart. Default: `true` @@ -741,7 +735,7 @@ For on-premises deployments, consider racks as zones to treat them as fault doma : Encryption in transit requires SSL/TLS certificates for each node in the cluster. : - When starting a local single-node cluster, a certificate is automatically generated for the cluster. 
: - When deploying a node in a multi-node cluster, you need to generate the certificate for the node using the `--cert generate_server_certs` command and copy it to the node *before* you start the node using the `--secure` flag, or the node creation will fail. -: When authentication is enabled, the default user is `yugabyte` in YSQL, and `cassandra` in YCQL. When a cluster is started, yugabyted outputs a message `Credentials File is stored at ` with the credentials file location. +: When authentication is enabled, the default user is `yugabyte` in YSQL, and `cassandra` in YCQL. When a cluster is started, `yugabyted` outputs a message `Credentials File is stored at ` with the credentials file location. : For examples creating secure local multi-node, multi-zone, and multi-region clusters, refer to [Examples](#examples). --read_replica *read_replica_node* @@ -751,7 +745,7 @@ For on-premises deployments, consider racks as zones to treat them as fault doma : Enable or disable the backup daemon with yugabyted start. Default: `false` : If you start a cluster using the `--backup_daemon` flag, you also need to download and extract the [YB Controller release](https://downloads.yugabyte.com/ybc/2.1.0.0-b9/ybc-2.1.0.0-b9-linux-x86_64.tar.gz) to the yugabyte-{{< yb-version version="preview" >}} release directory. ---enable_pg_parity_tech_preview *PostgreSQL-compatibilty* +--enable_pg_parity_early_access *PostgreSQL-compatibility* : Enable Enhanced PostgreSQL Compatibility Mode. Default: `false` #### Advanced flags @@ -782,6 +776,15 @@ Advanced flags can be set by using the configuration file in the `--config` flag --callhome *bool* : Enable or disable the *call home* feature that sends analytics data to Yugabyte. Default: `true`. +--data_dir *data-directory* +: The directory where yugabyted stores data. Must be an absolute path. Can be configured to a directory different from the one where configurations and logs are stored. 
+ +--log_dir *log-directory* +: The directory to store yugabyted logs. Must be an absolute path. This flag controls where the logs of the YugabyteDB nodes are stored. By default, logs are written to `~//logs`. + +--certs_dir *certs-directory* +: The path to the directory which has the certificates to be used for secure deployment. Must be an absolute path. Default path is `~//certs`. + --master_flags *master_flags* : Specify extra [master flags](../../../reference/configuration/yb-master#configuration-flags) as a set of key value pairs. Format (key=value,key=value). : To specify any CSV value flags, enclose the values inside curly braces `{}`. Refer to [Pass additional flags to YB-Master and YB-TServer](#pass-additional-flags-to-yb-master-and-yb-tserver). @@ -904,24 +907,29 @@ Usage: yugabyted xcluster [command] [flags] The following sub-commands are available for the `yugabyted xcluster` command: -- [checkpoint](#checkpoint) +- [create_checkpoint](#create-checkpoint) +- [add_to_checkpoint](#add-to-checkpoint) - [set_up](#set-up) +- [add_to_replication](#add-to-replication) - [status](#status-1) -- [delete](#delete-1) +- [delete_replication](#delete-replication) +- [remove_database_from_replication](#remove-database-from-replication) -#### checkpoint +#### create_checkpoint -Use the sub-command `yugabyted xcluster checkpoint` to checkpoint a new xCluster replication between two clusters. This command needs to be run from the source cluster of the replication. +Use the sub-command `yugabyted xcluster create_checkpoint` to checkpoint a new xCluster replication between two clusters. This command needs to be run from the source cluster of the replication. 
For example, to create a new xCluster replication, execute the following command: ```sh -./bin/yugabyted xcluster checkpoint --replication_id --databases +./bin/yugabyted xcluster create_checkpoint \ + --replication_id \ + --databases ``` -The `checkpoint` command takes a snapshot of the database and determines whether any of the databases to be replicated need to be copied to the target ([bootstrapped](#bootstrap-databases-for-xcluster)). If bootstrapping is required for any database, yugabyted outputs a message `Bootstrap is required for database(s)` along with the commands required for bootstrapping. +The `create_checkpoint` command takes a snapshot of the database and determines whether any of the databases to be replicated need to be copied to the target ([bootstrapped](#bootstrap-databases-for-xcluster)). -##### checkpoint flags +##### create_checkpoint flags -h | --help : Print the command-line help and exit. @@ -935,6 +943,34 @@ The `checkpoint` command takes a snapshot of the database and determines whether --replication_id *xcluster-replication-id* : A string to uniquely identify the replication. +#### add_to_checkpoint + +Use the sub-command `yugabyted xcluster add_to_checkpoint` to add new databases to an existing xCluster checkpoint between two clusters. This command needs to be run from the source cluster of the replication. + +For example, to add new databases to an xCluster replication, first checkpoint them using the following command: + +```sh +./bin/yugabyted xcluster add_to_checkpoint \ + --replication_id \ + --databases +``` + +The `add_to_checkpoint` command takes a snapshot of the database and determines whether any of the databases to be added to the replication need to be copied to the target ([bootstrapped](#bootstrap-databases-for-xcluster)). + +##### add_to_checkpoint flags + +-h | --help +: Print the command-line help and exit. + +--base_dir *base-directory* +: The base directory for the yugabyted server. 
+ +--databases *xcluster-databases* +: Comma-separated list of databases to be added to the existing replication. + +--replication_id *xcluster-replication-id* +: Replication ID of the xCluster replication to which the database(s) are to be added. + #### set_up Use the sub-command `yugabyted xcluster set_up` to set up xCluster replication between two clusters. Run this command from the source cluster of the replication. @@ -942,16 +978,45 @@ Use the sub-command `yugabyted xcluster set_up` to set up xCluster replication b For example, to set up xCluster replication between two clusters, run the following command from a node on the source cluster: ```sh -./bin/yugabyted xcluster set_up --target_address --replication_id +./bin/yugabyted xcluster set_up \ + --target_address \ + --replication_id \ + --bootstrap_done ``` -If bootstrap was required for any database, add the `--bootstrap_done` flag after completing the bootstrapping steps: +##### set_up flags + +-h | --help +: Print the command-line help and exit. + +--base_dir *base-directory* +: The base directory for the yugabyted server. + +--target_address *xcluster-target-address* +: IP address of a node in the target cluster. + +--replication_id *xcluster-replication-id* +: The replication ID of the xCluster replication to be set up. + +--bootstrap_done *xcluster-bootstrap-done* +: This flag indicates that the bootstrapping step has been completed. +: After running `yugabyted xcluster create_checkpoint` for an xCluster replication, yugabyted outputs a message listing the database(s) for which bootstrapping is required, along with the commands required for bootstrapping. + +#### add_to_replication + +Use the sub-command `yugabyted xcluster add_to_replication` to add databases to an existing xCluster replication between two clusters. Run this command from the source cluster of the replication. 
+ +For example, to add new databases to an existing xCluster replication between two clusters, run the following command from a node on the source cluster: ```sh -./bin/yugabyted xcluster set_up --target_address --replication_id --bootstrap_done +./bin/yugabyted xcluster add_to_replication \ + --databases \ + --target_address \ + --replication_id \ + --bootstrap_done ``` -##### set_up flags +##### add_to_replication flags -h | --help : Print the command-line help and exit. @@ -963,11 +1028,14 @@ If bootstrap was required for any database, add the `--bootstrap_done` flag afte : IP address of a node in the target cluster. --replication_id *xcluster-replication-id* -: The replication ID of the xCluster replication to be set up. +: Replication ID of the xCluster replication to which the database(s) are to be added. + +--databases *xcluster-databases* +: Comma-separated list of databases to be added to the existing replication. --bootstrap_done *xcluster-bootstrap-done* : This flag indicates that bootstrapping step has been completed. -: After running `yugabyted xcluster checkpoint` for an xCluster replication, if bootstrapping is required for any database, yugabyted outputs a message `Bootstrap is required for database(s)` along with the commands required for bootstrapping. +: After running `yugabyted xcluster add_to_checkpoint` for the databases, yugabyted outputs a message listing the database(s) for which bootstrapping is required, along with the commands required for bootstrapping. #### status @@ -997,17 +1065,19 @@ To display the status of a specific xCluster replication, run the following comm : The replication ID of the xCluster replication whose status you want to output. : Optional. If not specified, the status of all replications for the cluster is displayed. -#### delete +#### delete_replication -Use the sub-command `yugabyted xcluster delete` to delete an existing xCluster replication. Run this command from the source cluster. 
+Use the sub-command `yugabyted xcluster delete_replication` to delete an existing xCluster replication. Run this command from the source cluster. For example, delete an xCluster replication using the following command: ```sh -./bin/yugabyted xcluster delete --replication_id --target_address +./bin/yugabyted xcluster delete_replication \ + --replication_id \ + --target_address ``` -##### delete flags +##### delete_replication flags -h | --help : Print the command-line help and exit. @@ -1017,11 +1087,41 @@ For example, delete an xCluster replication using the following command: --target_address *xcluster-target-address* : IP address of a node in the target cluster. -: If the target is not available, the output of `yugabyted xcluster delete` will include the command that you will need to run on the target cluster (after bringing it back up) to remove the replication from the target. +: If the target is not available, the output of `yugabyted xcluster delete_replication` will include the command that you will need to run on the target cluster (after bringing it back up) to remove the replication from the target. --replication_id *xcluster-replication-id* : The replication ID of the xCluster replication to delete. +#### remove_database_from_replication + +Use the sub-command `yugabyted xcluster remove_database_from_replication` to remove database(s) from an existing xCluster replication. Run this command from the source cluster. + +For example, remove a database from an xCluster replication using the following command: + +```sh +./bin/yugabyted xcluster remove_database_from_replication \ + --databases \ + --replication_id \ + --target_address +``` + +##### remove_database_from_replication flags + +-h | --help +: Print the command-line help and exit. + +--base_dir *base-directory* +: The base directory for the yugabyted server. + +--target_address *xcluster-target-address* +: IP address of a node in the target cluster. 
+ +--replication_id *xcluster-replication-id* +: Replication ID of the xCluster replication from which the database(s) are to be removed. + +--databases *xcluster-databases* +: Comma-separated list of databases to be removed from the existing replication. + ----- ## Environment variables @@ -1756,10 +1856,10 @@ To set up xCluster replication between two secure clusters, do the following: 1. Checkpoint the xCluster replication from the source cluster. - Run the `yugabyted xcluster checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide a string to uniquely identify this replication. The `--databases` flag takes a comma-separated list of databases to be replicated. + Run the `yugabyted xcluster create_checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide a string to uniquely identify this replication. The `--databases` flag takes a comma-separated list of databases to be replicated. ```sh - ./bin/yugabyted xcluster checkpoint \ + ./bin/yugabyted xcluster create_checkpoint \ --replication_id= \ --databases= ``` @@ -1776,13 +1876,7 @@ To set up xCluster replication between two secure clusters, do the following: Provide the `--replication_id` you created in step 1, along with the `--target_address`, which is the IP address of any node in the target cluster node. - ```sh - ./bin/yugabyted xcluster set_up \ - --replication_id= \ - --target_address= - ``` - - If any of the databases to be replicated has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster checkpoint`) and add the `--bootstrap_done` flag in the command. For example: + If any of the databases to be replicated has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster create_checkpoint`). 
If the databases don't have any data, complete the DB schema creation on the target cluster. Then run: ```sh ./bin/yugabyted xcluster set_up \ --replication_id= \ --target_address= \ --bootstrap_done ``` - The `--bootstrap_done` flag is not needed if the databases to be replicated do not have any data. - {{% /tab %}} {{% tab header="Insecure clusters" lang="basic-2" %}} @@ -1801,10 +1893,10 @@ To set up xCluster replication between two clusters, do the following: 1. Checkpoint the xCluster replication from source cluster. - Run the `yugabyted xcluster checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide a string to uniquely identify this replication. The `--databases` flag takes a comma-separated list of databases to be replicated. + Run the `yugabyted xcluster create_checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide a string to uniquely identify this replication. The `--databases` flag takes a comma-separated list of databases to be replicated. ```sh - ./bin/yugabyted xcluster checkpoint \ + ./bin/yugabyted xcluster create_checkpoint \ --replication_id= \ --databases= ``` @@ -1815,13 +1907,7 @@ To set up xCluster replication between two clusters, do the following: Provide the `--replication_id` you created in step 1, along with the `--target_address`, which is the IP address of any node in the target cluster node. - ```sh - ./bin/yugabyted xcluster set_up \ - --replication_id= \ - --target_address= - ``` - - If any of the databases to be replicated has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster checkpoint`) and add the `--bootstrap_done` flag in the command. 
For example: + If any of the databases to be replicated has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster create_checkpoint`). If the databases don't have any data, complete the DB schema creation on the target cluster. Then run: ```sh ./bin/yugabyted xcluster set_up \ --replication_id= \ --target_address= \ --bootstrap_done ``` - The `--bootstrap_done` flag is not needed if the databases to be replicated do not have any data. - {{% /tab %}} {{< /tabpane >}} #### Bootstrap databases for xCluster -After running `yugabyted xcluster checkpoint`, you must bootstrap the databases before you can set up the xCluster replication. Bootstrapping is the process of preparing the databases on the target cluster for replication, and involves the following: +After running `yugabyted xcluster create_checkpoint`, you must bootstrap the databases before you can set up the xCluster replication. Bootstrapping is the process of preparing the databases on the target cluster for replication, and involves the following: - For databases that don't have any data, apply the same database and schema to the target cluster. -- For databases that do have data, you need to back up the databases on the source, and restore to the target. The commands to do this are provided in the output of the `yugabyted xcluster checkpoint` command. +- For databases that do have data, you need to back up the databases on the source, and restore to the target. The commands to do this are provided in the output of the `yugabyted xcluster create_checkpoint` command. If the cluster was not started using the `--backup_daemon` flag, you must manually complete the backup and restore using [distributed snapshots](../../../manage/backup-restore/snapshot-ysql/). 
@@ -1853,14 +1937,55 @@ After setting up the replication between the clusters, you can display the repli ./bin/yugabyted xcluster status ``` -To delete an xCluster replication, use the `yugabyted xcluster delete` command as follows: +To delete an xCluster replication, use the `yugabyted xcluster delete_replication` command as follows: + +```sh +./bin/yugabyted xcluster delete_replication \ + --replication_id= \ + --target_address= +``` + +#### Add databases to an existing xCluster replication + +After setting up the replication between the clusters, you can add new databases to it using the `yugabyted xcluster add_to_checkpoint` and `yugabyted xcluster add_to_replication` commands. + +1. Add databases to the xCluster replication checkpoint from the source cluster. + + Run the `yugabyted xcluster add_to_checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide the replication ID of the xCluster replication to which the databases are to be added. The `--databases` flag takes a comma-separated list of databases to be added. + + ```sh + ./bin/yugabyted xcluster add_to_checkpoint \ + --replication_id \ + --databases + ``` + +1. [Bootstrap](#bootstrap-databases-for-xcluster) the databases that you included in the replication. + +1. Add the databases to the xCluster replication by running the `yugabyted xcluster add_to_replication` command from any of the source cluster nodes. + + Provide the `--replication_id` of the xCluster replication to which the databases are to be added, along with the `--target_address`, which is the IP address of any node in the target cluster. Use the `--databases` flag to specify the list of databases to be added. + + If any of the databases to be added to the replication has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster add_to_checkpoint`). If the databases don't have any data, complete the DB schema creation on the target cluster. 
Then run: + + ```sh + ./bin/yugabyted xcluster add_to_replication \ + --databases \ + --replication_id= \ + --target_address= \ + --bootstrap_done + ``` + +#### Remove databases from an existing xCluster replication + +To remove databases from an existing xCluster replication, use the `yugabyted xcluster remove_database_from_replication` command as follows: ```sh -./bin/yugabyted xcluster delete \ +./bin/yugabyted xcluster remove_database_from_replication \ --replication_id= \ + --databases \ --target_address= ``` +Provide the `--replication_id` of the xCluster replication from which the databases are to be removed, along with the `--target_address`, which is the IP address of any node in the target cluster. Use the `--databases` flag to specify the list of databases to be removed. + ### Pass additional flags to YB-Master and YB-TServer You can set additional configuration options for the YB-Master and YB-TServer processes using the `--master_flags` and `--tserver_flags` flags. diff --git a/docs/content/stable/reference/configuration/yugabyted.md b/docs/content/stable/reference/configuration/yugabyted.md index c6bcb1409e7b..2f8d4185b993 100644 --- a/docs/content/stable/reference/configuration/yugabyted.md +++ b/docs/content/stable/reference/configuration/yugabyted.md @@ -710,12 +710,6 @@ Create a single-node locally and join other nodes that are part of the same clus --base_dir *base-directory* : The directory where yugabyted stores data, configurations, and logs. Must be an absolute path. ---data_dir *data-directory* -: The directory where yugabyted stores data. Must be an absolute path. Can be configured to a directory different from the one where configurations and logs are stored. - ---log_dir *log-directory* -: The directory to store yugabyted logs. Must be an absolute path. This flag controls where the logs of the YugabyteDB nodes are stored. By default, logs are written to `~/var/logs`. 
- --background *bool* : Enable or disable running yugabyted in the background as a daemon. Does not persist on restart. Default: `true` @@ -747,7 +741,7 @@ For on-premises deployments, consider racks as zones to treat them as fault doma : Enable or disable the backup daemon with yugabyted start. Default: `false` : If you start a cluster using the `--backup_daemon` flag, you also need to download and extract the [YB Controller release](https://downloads.yugabyte.com/ybc/2.1.0.0-b9/ybc-2.1.0.0-b9-linux-x86_64.tar.gz) to the yugabyte-{{< yb-version version="stable" >}} release directory. ---enable_pg_parity_tech_preview *PostgreSQL-compatibilty* +--enable_pg_parity_early_access *PostgreSQL-compatibility* : Enable Enhanced PostgreSQL Compatibility Mode. Default: `false` #### Advanced flags @@ -778,6 +772,15 @@ Advanced flags can be set by using the configuration file in the `--config` flag --callhome *bool* : Enable or disable the *call home* feature that sends analytics data to Yugabyte. Default: `true`. +--data_dir *data-directory* +: The directory where yugabyted stores data. Must be an absolute path. Can be configured to a directory different from the one where configurations and logs are stored. + +--log_dir *log-directory* +: The directory to store yugabyted logs. Must be an absolute path. This flag controls where the logs of the YugabyteDB nodes are stored. By default, logs are written to `~//logs`. + +--certs_dir *certs-directory* +: The path to the directory which has the certificates to be used for secure deployment. Must be an absolute path. Default path is `~//certs`. + --master_flags *master_flags* : Specify extra [master flags](../../../reference/configuration/yb-master#configuration-flags) as a set of key value pairs. Format (key=value,key=value). : To specify any CSV value flags, enclose the values inside curly braces `{}`. Refer to [Pass additional flags to YB-Master and YB-TServer](#pass-additional-flags-to-yb-master-and-yb-tserver). 
@@ -900,24 +903,29 @@ Usage: yugabyted xcluster [command] [flags] The following sub-commands are available for the `yugabyted xcluster` command: -- [checkpoint](#checkpoint) +- [create_checkpoint](#create-checkpoint) +- [add_to_checkpoint](#add-to-checkpoint) - [set_up](#set-up) +- [add_to_replication](#add-to-replication) - [status](#status-1) -- [delete](#delete-1) +- [delete_replication](#delete-replication) +- [remove_database_from_replication](#remove-database-from-replication) -#### checkpoint +#### create_checkpoint -Use the sub-command `yugabyted xcluster checkpoint` to checkpoint a new xCluster replication between two clusters. This command needs to be run from the source cluster of the replication. +Use the sub-command `yugabyted xcluster create_checkpoint` to checkpoint a new xCluster replication between two clusters. This command needs to be run from the source cluster of the replication. For example, to create a new xCluster replication, execute the following command: ```sh -./bin/yugabyted xcluster checkpoint --replication_id --databases +./bin/yugabyted xcluster create_checkpoint \ + --replication_id \ + --databases ``` -The `checkpoint` command takes a snapshot of the database and determines whether any of the databases to be replicated need to be copied to the target ([bootstrapped](#bootstrap-databases-for-xcluster)). If bootstrapping is required for any database, yugabyted outputs a message `Bootstrap is required for database(s)` along with the commands required for bootstrapping. +The `create_checkpoint` command takes a snapshot of the database and determines whether any of the databases to be replicated need to be copied to the target ([bootstrapped](#bootstrap-databases-for-xcluster)). -##### checkpoint flags +##### create_checkpoint flags -h | --help : Print the command-line help and exit. 
@@ -931,6 +939,34 @@ The `checkpoint` command takes a snapshot of the database and determines whether --replication_id *xcluster-replication-id* : A string to uniquely identify the replication. +#### add_to_checkpoint + +Use the sub-command `yugabyted xcluster add_to_checkpoint` to add new databases to an existing xCluster checkpoint between two clusters. This command needs to be run from the source cluster of the replication. + +For example, to add new databases to an xCluster replication, first checkpoint them using the following command: + +```sh +./bin/yugabyted xcluster add_to_checkpoint \ + --replication_id \ + --databases +``` + +The `add_to_checkpoint` command takes a snapshot of the database and determines whether any of the databases to be added to the replication need to be copied to the target ([bootstrapped](#bootstrap-databases-for-xcluster)). + +##### add_to_checkpoint flags + +-h | --help +: Print the command-line help and exit. + +--base_dir *base-directory* +: The base directory for the yugabyted server. + +--databases *xcluster-databases* +: Comma-separated list of databases to be added to the existing replication. + +--replication_id *xcluster-replication-id* +: Replication ID of the xCluster replication to which the database(s) are to be added. + #### set_up Use the sub-command `yugabyted xcluster set_up` to set up xCluster replication between two clusters. Run this command from the source cluster of the replication. 
@@ -938,16 +974,45 @@ Use the sub-command `yugabyted xcluster set_up` to set up xCluster replication b For example, to set up xCluster replication between two clusters, run the following command from a node on the source cluster: ```sh -./bin/yugabyted xcluster set_up --target_address --replication_id +./bin/yugabyted xcluster set_up \ + --target_address \ + --replication_id \ + --bootstrap_done ``` -If bootstrap was required for any database, add the `--bootstrap_done` flag after completing the bootstrapping steps: +##### set_up flags + +-h | --help +: Print the command-line help and exit. + +--base_dir *base-directory* +: The base directory for the yugabyted server. + +--target_address *xcluster-target-address* +: IP address of a node in the target cluster. + +--replication_id *xcluster-replication-id* +: The replication ID of the xCluster replication to be set up. + +--bootstrap_done *xcluster-bootstrap-done* +: This flag indicates that the bootstrapping step has been completed. +: After running `yugabyted xcluster create_checkpoint` for an xCluster replication, yugabyted outputs a message listing the database(s) for which bootstrapping is required, along with the commands required for bootstrapping. + +#### add_to_replication + +Use the sub-command `yugabyted xcluster add_to_replication` to add databases to an existing xCluster replication between two clusters. Run this command from the source cluster of the replication. + +For example, to add new databases to an existing xCluster replication between two clusters, run the following command from a node on the source cluster: ```sh -./bin/yugabyted xcluster set_up --target_address --replication_id --bootstrap_done +./bin/yugabyted xcluster add_to_replication \ + --databases \ + --target_address \ + --replication_id \ + --bootstrap_done ``` -##### set_up flags +##### add_to_replication flags -h | --help : Print the command-line help and exit. 
@@ -959,11 +1024,14 @@ If bootstrap was required for any database, add the `--bootstrap_done` flag afte : IP address of a node in the target cluster. --replication_id *xcluster-replication-id* -: The replication ID of the xCluster replication to be set up. +: Replication ID of the xCluster replication to which the database(s) are to be added. + +--databases *xcluster-databases* +: Comma-separated list of databases to be added to the existing replication. --bootstrap_done *xcluster-bootstrap-done* : This flag indicates that bootstrapping step has been completed. -: After running `yugabyted xcluster checkpoint` for an xCluster replication, if bootstrapping is required for any database, yugabyted outputs a message `Bootstrap is required for database(s)` along with the commands required for bootstrapping. +: After running `yugabyted xcluster add_to_checkpoint` for the databases, yugabyted outputs a message listing the database(s) for which bootstrapping is required, along with the commands required for bootstrapping. #### status @@ -993,17 +1061,19 @@ To display the status of a specific xCluster replication, run the following comm : The replication ID of the xCluster replication whose status you want to output. : Optional. If not specified, the status of all replications for the cluster is displayed. -#### delete +#### delete_replication -Use the sub-command `yugabyted xcluster delete` to delete an existing xCluster replication. Run this command from the source cluster. +Use the sub-command `yugabyted xcluster delete_replication` to delete an existing xCluster replication. Run this command from the source cluster. For example, delete an xCluster replication using the following command: ```sh -./bin/yugabyted xcluster delete --replication_id --target_address +./bin/yugabyted xcluster delete_replication \ + --replication_id \ + --target_address ``` -##### delete flags +##### delete_replication flags -h | --help : Print the command-line help and exit. 
@@ -1013,11 +1083,41 @@ For example, delete an xCluster replication using the following command:
 
 --target_address *xcluster-target-address*
 : IP address of a node in the target cluster.
-: If the target is not available, the output of `yugabyted xcluster delete` will include the command that you will need to run on the target cluster (after bringing it back up) to remove the replication from the target.
+: If the target is not available, the output of `yugabyted xcluster delete_replication` will include the command that you will need to run on the target cluster (after bringing it back up) to remove the replication from the target.
 
 --replication_id *xcluster-replication-id*
 : The replication ID of the xCluster replication to delete.
 
+#### remove_database_from_replication
+
+Use the sub-command `yugabyted xcluster remove_database_from_replication` to remove database(s) from an existing xCluster replication. Run this command from the source cluster.
+
+For example, remove a database from an xCluster replication using the following command:
+
+```sh
+./bin/yugabyted xcluster remove_database_from_replication \
+    --databases \
+    --replication_id \
+    --target_address
+```
+
+##### remove_database_from_replication flags
+
+-h | --help
+: Print the command-line help and exit.
+
+--base_dir *base-directory*
+: The base directory for the yugabyted server.
+
+--target_address *xcluster-target-address*
+: IP address of a node in the target cluster.
+
+--replication_id *xcluster-replication-id*
+: Replication ID of the xCluster replication from which the database(s) are to be removed.
+
+--databases *xcluster-databases*
+: Comma-separated list of databases to be removed from the existing replication.
+
 -----
 
 ## Environment variables
 
@@ -1752,10 +1852,10 @@ To set up xCluster replication between two secure clusters, do the following:
 
 1. Checkpoint the xCluster replication from the source cluster.
-    Run the `yugabyted xcluster checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide a string to uniquely identify this replication. The `--databases` flag takes a comma-separated list of databases to be replicated.
+    Run the `yugabyted xcluster create_checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide a string to uniquely identify this replication. The `--databases` flag takes a comma-separated list of databases to be replicated.
 
    ```sh
-   ./bin/yugabyted xcluster checkpoint \
+   ./bin/yugabyted xcluster create_checkpoint \
    --replication_id= \
    --databases=
    ```
 
@@ -1772,13 +1872,7 @@ To set up xCluster replication between two secure clusters, do the following:
 
    Provide the `--replication_id` you created in step 1, along with the `--target_address`, which is the IP address of any node in the target cluster node.
 
-   ```sh
-   ./bin/yugabyted xcluster set_up \
-   --replication_id= \
-   --target_address=
-   ```
-
-   If any of the databases to be replicated has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster checkpoint`) and add the `--bootstrap_done` flag in the command. For example:
+   If any of the databases to be replicated has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster create_checkpoint`). If the databases don't have any data, create the database schema on the target cluster. Then run:
 
    ```sh
    ./bin/yugabyted xcluster set_up \
@@ -1787,8 +1881,6 @@ To set up xCluster replication between two secure clusters, do the following:
    --bootstrap_done
    ```
 
-   The `--bootstrap_done` flag is not needed if the databases to be replicated do not have any data.
-
 {{% /tab %}}
 
 {{% tab header="Insecure clusters" lang="basic-2" %}}
 
@@ -1797,10 +1889,10 @@ To set up xCluster replication between two clusters, do the following:
 
 1.
Checkpoint the xCluster replication from source cluster.
 
-    Run the `yugabyted xcluster checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide a string to uniquely identify this replication. The `--databases` flag takes a comma-separated list of databases to be replicated.
+    Run the `yugabyted xcluster create_checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide a string to uniquely identify this replication. The `--databases` flag takes a comma-separated list of databases to be replicated.
 
    ```sh
-   ./bin/yugabyted xcluster checkpoint \
+   ./bin/yugabyted xcluster create_checkpoint \
    --replication_id= \
    --databases=
    ```
 
@@ -1811,13 +1903,7 @@ To set up xCluster replication between two clusters, do the following:
 
    Provide the `--replication_id` you created in step 1, along with the `--target_address`, which is the IP address of any node in the target cluster node.
 
-   ```sh
-   ./bin/yugabyted xcluster set_up \
-   --replication_id= \
-   --target_address=
-   ```
-
-   If any of the databases to be replicated has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster checkpoint`) and add the `--bootstrap_done` flag in the command. For example:
+   If any of the databases to be replicated has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster create_checkpoint`). If the databases don't have any data, create the database schema on the target cluster. Then run:
 
    ```sh
    ./bin/yugabyted xcluster set_up \
@@ -1826,18 +1912,16 @@ To set up xCluster replication between two clusters, do the following:
    --bootstrap_done
    ```
 
-   The `--bootstrap_done` flag is not needed if the databases to be replicated do not have any data.
- {{% /tab %}} {{< /tabpane >}} #### Bootstrap databases for xCluster -After running `yugabyted xcluster checkpoint`, you must bootstrap the databases before you can set up the xCluster replication. Bootstrapping is the process of preparing the databases on the target cluster for replication, and involves the following: +After running `yugabyted xcluster create_checkpoint`, you must bootstrap the databases before you can set up the xCluster replication. Bootstrapping is the process of preparing the databases on the target cluster for replication, and involves the following: - For databases that don't have any data, apply the same database and schema to the target cluster. -- For databases that do have data, you need to back up the databases on the source, and restore to the target. The commands to do this are provided in the output of the `yugabyted xcluster checkpoint` command. +- For databases that do have data, you need to back up the databases on the source, and restore to the target. The commands to do this are provided in the output of the `yugabyted xcluster create_checkpoint` command. If the cluster was not started using the `--backup_daemon` flag, you must manually complete the backup and restore using [distributed snapshots](../../../manage/backup-restore/snapshot-ysql/). @@ -1849,14 +1933,55 @@ After setting up the replication between the clusters, you can display the repli ./bin/yugabyted xcluster status ``` -To delete an xCluster replication, use the `yugabyted xcluster delete` command as follows: +To delete an xCluster replication, use the `yugabyted xcluster delete_replication` command as follows: + +```sh +./bin/yugabyted xcluster delete_replication \ + --replication_id= \ + --target_address= +``` + +#### Add databases to an existing xCluster replication + +After setting up the replication between the clusters, you can add new databases to it using the `yugabyted xcluster add_to_checkpoint` and `yugabyted xcluster add_to_replication` commands. + +1. 
Add databases to the xCluster replication checkpoint from the source cluster.
+
+   Run the `yugabyted xcluster add_to_checkpoint` command from any source cluster node, with the `--replication_id` and `--databases` flags. For `--replication_id`, provide the replication ID of the xCluster replication to which the databases are to be added. The `--databases` flag takes a comma-separated list of databases to be added.
+
+   ```sh
+   ./bin/yugabyted xcluster add_to_checkpoint --replication_id --databases
+   ```
+
+1. [Bootstrap](#bootstrap-databases-for-xcluster) the databases that you included in the replication.
+
+1. Add the databases to the xCluster replication by running the `yugabyted xcluster add_to_replication` command from any of the source cluster nodes.
+
+   Provide the `--replication_id` of the xCluster replication to which the databases are to be added, along with the `--target_address`, which is the IP address of any node in the target cluster. Use the `--databases` flag to specify the list of databases to be added.
+
+   If any of the databases to be added to the replication has data, complete the bootstrapping (directions are provided in the output of `yugabyted xcluster add_to_checkpoint`). If the databases don't have any data, create the database schema on the target cluster.
Then run:
+
+   ```sh
+   ./bin/yugabyted xcluster add_to_replication \
+       --databases \
+       --replication_id= \
+       --target_address= \
+       --bootstrap_done
+   ```
+
+#### Remove databases from an existing xCluster replication
+
+To remove databases from an existing xCluster replication, use the `yugabyted xcluster remove_database_from_replication` command as follows:
 
 ```sh
-./bin/yugabyted xcluster delete \
+./bin/yugabyted xcluster remove_database_from_replication \
 --replication_id= \
+    --databases \
 --target_address=
 ```
 
+Provide the `--replication_id` of the xCluster replication from which the databases are to be removed, along with the `--target_address`, which is the IP address of any node in the target cluster. Use the `--databases` flag to specify the list of databases to be removed.
+
 ### Pass additional flags to YB-Master and YB-TServer
 
 You can set additional configuration options for the YB-Master and YB-TServer processes using the `--master_flags` and `--tserver_flags` flags.
 
diff --git a/docs/content/v2.20/reference/configuration/yugabyted.md b/docs/content/v2.20/reference/configuration/yugabyted.md
index 07010a6718c9..e85d6cbe1ed1 100644
--- a/docs/content/v2.20/reference/configuration/yugabyted.md
+++ b/docs/content/v2.20/reference/configuration/yugabyted.md
@@ -68,153 +68,91 @@ $ ./bin/yugabyted start -h
 
 The following commands are available:
 
-- [start](#start)
-- [configure](#configure)
 - [cert](#cert)
-- [stop](#stop)
-- [destroy](#destroy)
-- [status](#status)
-- [version](#version)
 - [collect_logs](#collect-logs)
+- [configure](#configure)
+- [configure_read_replica](#configure-read-replica)
 - [connect](#connect)
 - [demo](#demo)
+- [destroy](#destroy)
+- [start](#start)
+- [status](#status)
+- [stop](#stop)
+- [version](#version)
 
 -----
 
-### start
-
-Use the `yugabyted start` command to start a one-node YugabyteDB cluster for running [YSQL](../../../architecture/layered-architecture/#yugabyte-sql-ysql) and
[YCQL](../../../architecture/layered-architecture/#yugabyte-cloud-ql-ycql) workloads in your local environment. +### cert -Note that to use encryption in transit, OpenSSL must be installed on the nodes. +Use the `yugabyted cert` command to create TLS/SSL certificates for deploying a secure YugabyteDB cluster. #### Syntax ```text -Usage: yugabyted start [flags] +Usage: yugabyted cert [command] [flags] ``` -Examples: +#### Commands -- Create a local single-node cluster: +The following sub-commands are available for the `yugabyted cert` command: - ```sh - ./bin/yugabyted start - ``` +- [generate_server_certs](#generate-server-certs) -- Create a local single-node cluster with encryption in transit and authentication: +#### generate_server_certs - ```sh - ./bin/yugabyted start --secure - ``` +Use the `yugabyted cert generate_server_certs` sub-command to generate keys and certificates for the specified hostnames. -- Create a single-node locally and join other nodes that are part of the same cluster: +For example, to create node server certificates for hostnames 127.0.0.1, 127.0.0.2, 127.0.0.3, execute the following command: - ```sh - ./bin/yugabyted start --join=host:port,[host:port] - ``` +```sh +./bin/yugabyted cert generate_server_certs --hostnames=127.0.0.1,127.0.0.2,127.0.0.3 +``` #### Flags -h | --help : Print the command-line help and exit. ---advertise_address *bind-ip* -: IP address or local hostname on which yugabyted will listen. - ---join *master-ip* -: The IP address of the existing yugabyted server that the new yugabyted server will join, or if the server was restarted, rejoin. +--hostnames *hostnames* +: Hostnames of the nodes to be added in the cluster. Mandatory flag. ---config *config-file* -: Yugabyted configuration file path. Refer to [Advanced flags](#advanced-flags). +--data_dir *data-directory* +: The data directory for the yugabyted server. --base_dir *base-directory* -: The directory where yugabyted stores data, configurations, and logs. 
Must be an absolute path. - ---data_dir *data-directory* -: The directory where yugabyted stores data. Must be an absolute path. Can be configured to a directory different from the one where configurations and logs are stored. +: The base directory for the yugabyted server. --log_dir *log-directory* -: The directory to store yugabyted logs. Must be an absolute path. This flag controls where the logs of the YugabyteDB nodes are stored. By default, logs are written to `~/var/logs`. - ---background *bool* -: Enable or disable running yugabyted in the background as a daemon. Does not persist on restart. Default: `true` - ---cloud_location *cloud-location* -: Cloud location of the yugabyted node in the format `cloudprovider.region.zone`. This information is used for multi-zone, multi-region, and multi-cloud deployments of YugabyteDB clusters. - -{{}} -For on-premises deployments, consider racks as zones to treat them as fault domains. -{{}} - ---fault_tolerance *fault_tolerance* -: Determines the fault tolerance constraint to be applied on the data placement policy of the YugabyteDB cluster. This flag can accept the following values: none, zone, region, cloud. - ---ui *bool* -: Enable or disable the webserver UI (available at ). Default: `true` - ---secure -: Enable [encryption in transit](../../../secure/tls-encryption/) and [authentication](../../../secure/enable-authentication/ysql/) for the node. -: Encryption in transit requires SSL/TLS certificates for each node in the cluster. -: - When starting a local single-node cluster, a certificate is automatically generated for the cluster. -: - When deploying a node in a multi-node cluster, you need to generate the certificate for the node using the `--cert generate_server_certs` command and copy it to the node *before* you start the node using the `--secure` flag, or the node creation will fail. -: When authentication is enabled, the default user is `yugabyte` in YSQL, and `cassandra` in YCQL. 
When a cluster is started,`yugabyted` outputs a message `Credentials File is stored at ` with the credentials file location. -: For examples creating secure local multi-node, multi-zone, and multi-region clusters, refer to [Examples](#examples). - -#### Advanced flags - -Advanced flags can be set by using the configuration file in the `--config` flag. The advanced flags support for the `start` command is as follows: - ---ycql_port *ycql-port* -: The port on which YCQL will run. - ---ysql_port *ysql-port* -: The port on which YSQL will run. - ---master_rpc_port *master-rpc-port* -: The port on which YB-Master will listen for RPC calls. - ---tserver_rpc_port *tserver-rpc-port* -: The port on which YB-TServer will listen for RPC calls. - ---master_webserver_port *master-webserver-port* -: The port on which YB-Master webserver will run. +: The log directory for the yugabyted server. ---tserver_webserver_port *tserver-webserver-port* -: The port on which YB-TServer webserver will run. +----- ---webserver_port *webserver-port* -: The port on which main webserver will run. +### collect_logs ---callhome *bool* -: Enable or disable the *call home* feature that sends analytics data to Yugabyte. Default: `true`. +Use the `yugabyted collect_logs` command to generate a zipped file with all logs. ---master_flags *master_flags* -: Specify extra [master flags](../../../reference/configuration/yb-master#configuration-flags) as a set of key value pairs. Format (key=value,key=value). +#### Syntax ---tserver_flags *tserver_flags* -: Specify extra [tserver flags](../../../reference/configuration/yb-tserver#configuration-flags) as a set of key value pairs. Format (key=value,key=value). +```sh +Usage: yugabyted collect_logs [flags] +``` ---ysql_enable_auth *bool* -: Enable or disable YSQL authentication. Default: `false`. -: If the `YSQL_PASSWORD` [environment variable](#environment-variables) exists, then authentication mode is automatically set to `true`. 
+#### Flags ---use_cassandra_authentication *bool* -: Enable or disable YCQL authentication. Default: `false`. -: If the `YCQL_USER` or `YCQL_PASSWORD` [environment variables](#environment-variables) exist, then authentication mode is automatically set to `true`. -: Note that the corresponding environment variables have higher priority than the command-line flags. +-h | --help +: Print the command-line help and exit. ---initial_scripts_dir *initial-scripts-dir* -: The directory from where yugabyted reads initialization scripts. -: Script format - YSQL `.sql`, YCQL `.cql`. -: Initialization scripts are executed in sorted name order. +--stdout *stdout* +: Redirect the `logs.tar.gz` file's content to stdout. For example, `docker exec \ bin/yugabyted collect_logs --stdout > yugabyted.tar.gz` -#### Deprecated flags +--data_dir *data-directory* +: The data directory for the yugabyted server whose logs are desired. ---daemon *bool* -: Enable or disable running yugabyted in the background as a daemon. Does not persist on restart. Default: `true`. +--base_dir *base-directory* +: The base directory for the yugabyted server whose logs are desired. ---listen *bind-ip* -: The IP address or localhost name to which yugabyted will listen. +--log_dir *log-directory* +: The log directory for the yugabyted server whose logs are desired. ----- @@ -224,6 +162,7 @@ Use the `yugabyted configure` command to do the following: - Configure the data placement policy of the cluster. - Enable or disable encryption at rest. +- Run yb-admin commands on a cluster. 
#### Syntax @@ -233,14 +172,15 @@ Usage: yugabyted configure [command] [flags] #### Commands -The following subcommands are available for `yugabyted configure` command: +The following sub-commands are available for `yugabyted configure` command: - [data_placement](#data-placement) - [encrypt_at_rest](#encrypt-at-rest) +- [admin_operation](#admin-operation) #### data_placement -Use the `yugabyted configure data_placement` subcommand to set or modify placement policy of the nodes of the deployed cluster. +Use the `yugabyted configure data_placement` sub-command to set or modify placement policy of the nodes of the deployed cluster, and specify the [preferred region(s)](../../../architecture/key-concepts/#preferred-region). For example, you would use the following command to create a multi-zone YugabyteDB cluster: @@ -248,7 +188,7 @@ For example, you would use the following command to create a multi-zone Yugabyte ./bin/yugabyted configure data_placement --fault_tolerance=zone ``` -#### data_placement flags +##### data_placement flags -h | --help : Print the command-line help and exit. @@ -257,14 +197,11 @@ For example, you would use the following command to create a multi-zone Yugabyte : Specify the fault tolerance for the cluster. This flag can accept one of the following values: zone, region, cloud. For example, when the flag is set to zone (`--fault_tolerance=zone`), yugabyted applies zone fault tolerance to the cluster, placing the nodes in three different zones, if available. --constraint_value *data-placement-constraint-value* -: Specify the data placement for the YugabyteDB cluster. This is an optional flag. The flag takes comma-separated values in the format `cloud.region.zone`. +: Specify the data placement and preferred region(s) for the YugabyteDB cluster. This is an optional flag. The flag takes comma-separated values in the format `cloud.region.zone:priority`. 
The priority is an integer and is optional, and determines the preferred region(s) in order of preference. You must specify the same number of data placement values as the [replication factor](../../../architecture/key-concepts/#replication-factor-rf). --rf *replication-factor* : Specify the replication factor for the cluster. This is an optional flag which takes a value of `3` or `5`. ---config *config-file* -: The path to the configuration file of the yugabyted server. - --data_dir *data-directory* : The data directory for the yugabyted server. @@ -276,7 +213,7 @@ For example, you would use the following command to create a multi-zone Yugabyte #### encrypt_at_rest -Use the `yugabyted configure encrypt_at_rest` subcommand to enable or disable [encryption at rest](../../../secure/encryption-at-rest/) for the deployed cluster. +Use the `yugabyted configure encrypt_at_rest` sub-command to enable or disable [encryption at rest](../../../secure/encryption-at-rest/) for the deployed cluster. To use encryption at rest, OpenSSL must be installed on the nodes. @@ -292,7 +229,7 @@ To disable encryption at rest for a YugabyteDB cluster which has encryption at r ./bin/yugabyted configure encrypt_at_rest --disable ``` -#### encrypt_at_rest flags +##### encrypt_at_rest flags -h | --help : Print the command-line help and exit. @@ -303,9 +240,6 @@ To disable encryption at rest for a YugabyteDB cluster which has encryption at r --enable *enable* : Enable encryption at rest for the cluster. There is no need to set a value for the flag. Use `--enable` or `--disable` flag to toggle encryption features on a YugabyteDB cluster. ---config *config-file* -: The path to the configuration file of the yugabyted server. - --data_dir *data-directory* : The data directory for the yugabyted server. @@ -313,84 +247,209 @@ To disable encryption at rest for a YugabyteDB cluster which has encryption at r : The base directory for the yugabyted server. 
--log_dir *log-directory* -: : The log directory for the yugabyted server. +: The log directory for the yugabyted server. + +#### admin_operation + +Use the `yugabyted configure admin_operation` command to run a yb-admin command on the YugabyteDB cluster. + +For example, get the YugabyteDB universe configuration: + +```sh +./bin/yugabyted configure admin_operation --command 'get_universe_config' +``` + +##### admin_operation flags + +-h | --help +: Print the command-line help and exit. + +--data_dir *data-directory* +: The data directory for the yugabyted server. + +--command *yb-admin-command* +: Specify the yb-admin command to be executed on the YugabyteDB cluster. + +--master_addresses *master-addresses* +: Comma-separated list of current masters of the YugabyteDB cluster. ----- -### cert +### configure_read_replica -Use the `yugabyted cert` command to create TLS/SSL certificates for deploying a secure YugabyteDB cluster. +Use the `yugabyted configure_read_replica` command to configure, modify, or delete a [read replica cluster](../../../architecture/key-concepts/#read-replica-cluster). #### Syntax ```text -Usage: yugabyted cert [command] [flags] +Usage: yugabyted configure_read_replica [command] [flags] ``` #### Commands -The following subcommands are available for the `yugabyted cert` command: +The following sub-commands are available for the `yugabyted configure_read_replica` command: -- [generate_server_certs](#generate-server-certs) +- [new](#new) +- [modify](#modify) +- [delete](#delete) -#### generate_server_certs +#### new -Use the `yugabyted cert generate_server_certs` subcommand to generate keys and certificates for the specified hostnames. +Use the sub-command `yugabyted configure_read_replica new` to configure a new read replica cluster. 
-For example, to create node server certificates for hostnames 127.0.0.1, 127.0.0.2, 127.0.0.3, execute the following command: +For example, to create a new read replica cluster, execute the following command: ```sh -./bin/yugabyted cert generate_server_certs --hostnames=127.0.0.1,127.0.0.2,127.0.0.3 +./bin/yugabyted configure_read_replica new --rf=1 --data_placement_constraint=cloud1.region1.zone1 ``` -#### Flags +##### new flags -h | --help : Print the command-line help and exit. ---hostnames *hostnames* -: Hostnames of the nodes to be added in the cluster. Mandatory flag. +--base_dir *base-directory* +: The base directory for the yugabyted server. ---config *config-file* -: The path to the configuration file of the yugabyted server. +--rf *read-replica-replication-factor* +: Replication factor for the read replica cluster. ---data_dir *data-directory* -: The data directory for the yugabyted server. +--data_placement_constraint *read-replica-constraint-value* +: Data placement constraint value for the read replica cluster. This is an optional flag. The flag takes comma-separated values in the format `cloud.region.zone:num_of_replicas`. + +#### modify + +Use the sub-command `yugabyted configure_read_replica modify` to modify an existing read replica cluster. + +For example, modify a read replica cluster using the following commands. + +Change the replication factor of the existing read replica cluster: + +```sh +./bin/yugabyted configure_read_replica modify --rf=2 + +``` + +Change the replication factor and also specify the placement constraint: + +```sh +./bin/yugabyted configure_read_replica modify --rf=2 --data_placement_constraint=cloud1.region1.zone1,cloud2.region2.zone2 + +``` + +##### modify flags + +-h | --help +: Print the command-line help and exit. --base_dir *base-directory* : The base directory for the yugabyted server. ---log_dir *log-directory* -: The log directory for the yugabyted server. 
+--rf *read-replica-replication-factor* +: Replication factor for the read replica cluster. + +--data_placement_constraint *read-replica-constraint-value* +: Data placement constraint value for the read replica cluster. This is an optional flag. The flag takes comma-separated values in the format cloud.region.zone. + +#### delete + +Use the sub-command `yugabyted configure_read_replica delete` to delete an existing read replica cluster. + +For example, delete a read replica cluster using the following command: + +```sh +./bin/yugabyted configure_read_replica delete +``` + +##### delete flags + +-h | --help +: Print the command-line help and exit. + +--base_dir *base-directory* +: The base directory for the yugabyted server. ----- -### stop +### connect -Use the `yugabyted stop` command to stop a YugabyteDB cluster. +Use the `yugabyted connect` command to connect to the cluster using [ysqlsh](../../../admin/ysqlsh/) or [ycqlsh](../../../admin/ycqlsh). #### Syntax ```sh -Usage: yugabyted stop [flags] +Usage: yugabyted connect [command] [flags] ``` +#### Commands + +The following sub-commands are available for the `yugabyted connect` command: + +- [ysql](#ysql) +- [ycql](#ycql) + +#### ysql + +Use the `yugabyted connect ysql` sub-command to connect to YugabyteDB with [ysqlsh](../../../admin/ysqlsh/). + +#### ycql + +Use the `yugabyted connect ycql` sub-command to connect to YugabyteDB with [ycqlsh](../../../admin/ycqlsh). + #### Flags -h | --help : Print the command-line help and exit. ---config *config-file* -: The path to the configuration file of the yugabyted server that needs to be stopped. +--data_dir *data-directory* +: The data directory for the yugabyted server to connect to. + +--base_dir *base-directory* +: The base directory for the yugabyted server to connect to. + +--log_dir *log-directory* +: The log directory for the yugabyted server to connect to. 
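As a quick sketch of the `connect` sub-commands described above (this assumes a yugabyted cluster is already running locally with the default base directory):

```sh
# Open a ysqlsh session against the local yugabyted node
./bin/yugabyted connect ysql

# Open a ycqlsh session against the same node
./bin/yugabyted connect ycql
```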
+
+-----
+
+### demo
+
+Use the `yugabyted demo` command to use the demo [Northwind sample dataset](../../../sample-data/northwind/) with YugabyteDB.
+
+#### Syntax
+
+```sh
+Usage: yugabyted demo [command] [flags]
+```
+
+#### Commands
+
+The following sub-commands are available for the `yugabyted demo` command:
+
+- [connect](#connect-1)
+- [destroy](#destroy-1)
+
+#### connect
+
+Use the `yugabyted demo connect` sub-command to load the [Northwind sample dataset](../../../sample-data/northwind/) into a new `yb_demo_northwind` SQL database, and then open the `ysqlsh` prompt for the same database.
+
+#### destroy
+
+Use the `yugabyted demo destroy` sub-command to shut down the yugabyted single-node cluster and remove data, configuration, and log directories. This sub-command also deletes the `yb_demo_northwind` database.
+
+#### Flags
+
+-h | --help
+: Print the help message and exit.
 
 --data_dir *data-directory*
-: The data directory for the yugabyted server that needs to be stopped.
+: The data directory for the yugabyted server to connect to or destroy.
 
 --base_dir *base-directory*
-: The base directory for the yugabyted server that needs to be stopped.
+: The base directory for the yugabyted server to connect to or destroy.
 
 --log_dir *log-directory*
-: The log directory for the yugabyted server that needs to be stopped.
+: The log directory for the yugabyted server to connect to or destroy.
 
 -----
 
@@ -404,91 +463,186 @@ Use the `yugabyted destroy` command to delete a cluster.
 
 Usage: yugabyted destroy [flags]
 ```
 
+For examples, see [Destroy a local cluster](#destroy-a-local-cluster).
+
 #### Flags
 
 -h | --help
 : Print the command-line help and exit.
 
---config *config-file*
-: The path to the configuration file of the yugabyted server that needs to be destroyed.
+--data_dir *data-directory*
+: The data directory for the yugabyted server that needs to be destroyed.
+
+--base_dir *base-directory*
+: The base directory for the yugabyted server that needs to be destroyed.
+ +--log_dir *log-directory* +: The log directory for the yugabyted server that needs to be destroyed. + +----- + +### start + +Use the `yugabyted start` command to start a one-node YugabyteDB cluster for running [YSQL](../../../api/ysql) and [YCQL](../../../api/ycql) workloads in your local environment. + +To use encryption in transit, OpenSSL must be installed on the nodes. + +If you want to use backup and restore, start the node with `--backup_daemon=true` to initialize the backup and restore agent. You also need to download and extract the [YB Controller release](https://downloads.yugabyte.com/ybc/2.1.0.0-b9/ybc-2.1.0.0-b9-linux-x86_64.tar.gz) to the yugabyte-{{< yb-version version="v2.20" >}} release directory. + +#### Syntax + +```text +Usage: yugabyted start [flags] +``` + +Examples: + +Create a local single-node cluster: + +```sh +./bin/yugabyted start +``` + +Create a local single-node cluster with encryption in transit and authentication: + +```sh +./bin/yugabyted start --secure +``` + +Create a single-node locally and join other nodes that are part of the same cluster: + +```sh +./bin/yugabyted start --join=host:port,[host:port] +``` + +#### Flags + +-h | --help +: Print the command-line help and exit. + +--advertise_address *bind-ip* +: IP address or local hostname on which yugabyted will listen. + +--join *master-ip* +: The IP or DNS address of the existing yugabyted server that the new yugabyted server will join, or if the server was restarted, rejoin. The join flag accepts IP addresses, DNS names, or labels with correct [DNS syntax](https://en.wikipedia.org/wiki/Domain_Name_System#Domain_name_syntax,_internationalization) (that is, letters, numbers, and hyphens). + +--config *config-file* +: Yugabyted configuration file path. Refer to [Advanced flags](#advanced-flags). + +--base_dir *base-directory* +: The directory where yugabyted stores data, configurations, and logs. Must be an absolute path. 
+
+--background *bool*
+: Enable or disable running yugabyted in the background as a daemon. Does not persist on restart. Default: `true`
+
+--cloud_location *cloud-location*
+: Cloud location of the yugabyted node in the format `cloudprovider.region.zone`. This information is used for multi-zone, multi-region, and multi-cloud deployments of YugabyteDB clusters.
+
+{{}}
+For on-premises deployments, consider racks as zones to treat them as fault domains.
+{{}}
+
+--fault_tolerance *fault_tolerance*
+: Determines the fault tolerance constraint to be applied on the data placement policy of the YugabyteDB cluster. This flag can accept the following values: none, zone, region, cloud.
+
+--ui *bool*
+: Enable or disable the webserver UI (available at ). Default: `true`
+
+--secure
+: Enable [encryption in transit](../../../secure/tls-encryption/) and [authentication](../../../secure/enable-authentication/authentication-ysql/) for the node.
+: Encryption in transit requires SSL/TLS certificates for each node in the cluster.
+: - When starting a local single-node cluster, a certificate is automatically generated for the cluster.
+: - When deploying a node in a multi-node cluster, you need to generate the certificate for the node using the `--cert generate_server_certs` command and copy it to the node *before* you start the node using the `--secure` flag, or the node creation will fail.
+: When authentication is enabled, the default user is `yugabyte` in YSQL, and `cassandra` in YCQL. When a cluster is started, `yugabyted` outputs a message `Credentials File is stored at ` with the credentials file location.
+: For examples of creating secure local multi-node, multi-zone, and multi-region clusters, refer to [Examples](#examples).
+
+--read_replica *read_replica_node*
+: Use this flag to start a read replica node.
 
---data_dir *data-directory*
-: The data directory for the yugabyted server that needs to be destroyed.
+--backup_daemon *backup-daemon-process*
+: Enable or disable the backup daemon with yugabyted start. Default: `false`
+: If you start a cluster using the `--backup_daemon` flag, you also need to download and extract the [YB Controller release](https://downloads.yugabyte.com/ybc/2.1.0.0-b9/ybc-2.1.0.0-b9-linux-x86_64.tar.gz) to the yugabyte-{{< yb-version version="v2.20" >}} release directory.

---base_dir *base-directory*
-: The base directory for the yugabyted server that needs to be destroyed.
+--enable_pg_parity_early_access *PostgreSQL-compatibility*
+: Enable Enhanced PostgreSQL Compatibility Mode. Default: `false`

---log_dir *log-directory*
-: The log directory for the yugabyted server that needs to be destroyed.
+#### Advanced flags

------

+The following advanced flags are supported for the `start` command, and can be set by using the configuration file passed in the `--config` flag:

-### status
+--ycql_port *ycql-port*
+: The port on which YCQL will run.

-Use the `yugabyted status` command to check the status.
+--ysql_port *ysql-port*
+: The port on which YSQL will run.

-#### Syntax
+--master_rpc_port *master-rpc-port*
+: The port on which YB-Master will listen for RPC calls.

-```sh
-Usage: yugabyted status [flags]
-```
+--tserver_rpc_port *tserver-rpc-port*
+: The port on which YB-TServer will listen for RPC calls.

-#### Flags
+--master_webserver_port *master-webserver-port*
+: The port on which the YB-Master webserver will run.

--h | --help
-: Print the command-line help and exit.
+--tserver_webserver_port *tserver-webserver-port*
+: The port on which the YB-TServer webserver will run.

---config *config-file*
-: The path to the configuration file of the yugabyted server whose status is desired.
+--webserver_port *webserver-port*
+: The port on which the main webserver will run.

---data_dir *data-directory*
-: The data directory for the yugabyted server whose status is desired.
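+
+As a hedged illustration, the port flags above can be combined in a single invocation; the port values are arbitrary, and this assumes the advanced flags are also accepted directly on the `yugabyted start` command line:
+
+```sh
+# Illustrative only: start a local node on non-default API and UI ports
+./bin/yugabyted start \
+    --advertise_address=127.0.0.1 \
+    --ysql_port=5434 \
+    --ycql_port=9043 \
+    --webserver_port=8080
+```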
+--callhome *bool* +: Enable or disable the *call home* feature that sends analytics data to Yugabyte. Default: `true`. ---base_dir *base-directory* -: The base directory for the yugabyted server whose status is desired. +--data_dir *data-directory* +: The directory where yugabyted stores data. Must be an absolute path. Can be configured to a directory different from the one where configurations and logs are stored. --log_dir *log-directory* -: The log directory for the yugabyted server whose status is desired. - ------ - -### version +: The directory to store yugabyted logs. Must be an absolute path. This flag controls where the logs of the YugabyteDB nodes are stored. By default, logs are written to `~//logs`. -Use the `yugabyted version` command to check the version number. +--certs_dir *certs-directory* +: The path to the directory which has the certificates to be used for secure deployment. Must be an absolute path. Default path is `~//certs`. -#### Syntax +--master_flags *master_flags* +: Specify extra [master flags](../../../reference/configuration/yb-master#configuration-flags) as a set of key value pairs. Format (key=value,key=value). +: To specify any CSV value flags, enclose the values inside curly braces `{}`. Refer to [Pass additional flags to YB-Master and YB-TServer](#pass-additional-flags-to-yb-master-and-yb-tserver). -```sh -Usage: yugabyted version [flags] -``` +--tserver_flags *tserver_flags* +: Specify extra [tserver flags](../../../reference/configuration/yb-tserver#configuration-flags) as a set of key value pairs. Format (key=value,key=value). +: To specify any CSV value flags, enclose the values inside curly braces `{}`. Refer to [Pass additional flags to YB-Master and YB-TServer](#pass-additional-flags-to-yb-master-and-yb-tserver). -#### Flags +--ysql_enable_auth *bool* +: Enable or disable YSQL authentication. Default: `false`. 
+: If the `YSQL_PASSWORD` [environment variable](#environment-variables) exists, then authentication mode is automatically set to `true`. --h | --help -: Print the command-line help and exit. +--use_cassandra_authentication *bool* +: Enable or disable YCQL authentication. Default: `false`. +: If the `YCQL_USER` or `YCQL_PASSWORD` [environment variables](#environment-variables) exist, then authentication mode is automatically set to `true`. +: Note that the corresponding environment variables have higher priority than the command-line flags. ---config *config-file* -: The path to the configuration file of the yugabyted server whose version is desired. +--initial_scripts_dir *initial-scripts-dir* +: The directory from where yugabyted reads initialization scripts. +: Script format - YSQL `.sql`, YCQL `.cql`. +: Initialization scripts are executed in sorted name order. ---data_dir *data-directory* -: The data directory for the yugabyted server whose version is desired. +#### Deprecated flags ---base_dir *base-directory* -: The base directory for the yugabyted server whose version is desired. +--daemon *bool* +: Enable or disable running yugabyted in the background as a daemon. Does not persist on restart. Use [--background](#flags) instead. Default: `true`. ---log_dir *log-directory* -: The log directory for the yugabyted server whose version is desired. +--listen *bind-ip* +: The IP address or localhost name to which yugabyted will listen. ----- -### collect_logs +### status -Use the `yugabyted collect_logs` command to generate a zipped file with all logs. +Use the `yugabyted status` command to check the status. #### Syntax ```sh -Usage: yugabyted collect_logs [flags] +Usage: yugabyted status [flags] ``` #### Flags @@ -496,108 +650,66 @@ Usage: yugabyted collect_logs [flags] -h | --help : Print the command-line help and exit. ---stdout *stdout* -: Redirect the `logs.tar.gz` file's content to stdout. 
For example, `docker exec \ bin/yugabyted collect_logs --stdout > yugabyted.tar.gz` - ---config *config-file* -: The path to the configuration file of the yugabyted server whose logs are desired. - --data_dir *data-directory* -: The data directory for the yugabyted server whose logs are desired. +: The data directory for the yugabyted server whose status is desired. --base_dir *base-directory* -: The base directory for the yugabyted server whose logs are desired. +: The base directory for the yugabyted server whose status is desired. --log_dir *log-directory* -: The log directory for the yugabyted server whose logs are desired. +: The log directory for the yugabyted server whose status is desired. ----- -### connect +### stop -Use the `yugabyted connect` command to connect to the cluster using [ysqlsh](../../../admin/ysqlsh/) or [ycqlsh](../../../admin/ycqlsh). +Use the `yugabyted stop` command to stop a YugabyteDB cluster. #### Syntax ```sh -Usage: yugabyted connect [command] [flags] +Usage: yugabyted stop [flags] ``` -#### Commands - -The following subcommands are available for the `yugabyted connect` command: - -- [ysql](#ysql) -- [ycql](#ycql) - -#### ysql - -Use the `yugabyted connect ysql` subcommand to connect to YugabyteDB with [ysqlsh](../../../admin/ysqlsh/). - -#### ycql - -Use the `yugabyted connect ycql` subcommand to connect to YugabyteDB with [ycqlsh](../../../admin/ycqlsh). - #### Flags -h | --help : Print the command-line help and exit. ---config *config-file* -: The path to the configuration file of the yugabyted server to connect to. - --data_dir *data-directory* -: The data directory for the yugabyted server to connect to. +: The data directory for the yugabyted server that needs to be stopped. --base_dir *base-directory* -: The base directory for the yugabyted server to connect to. +: The base directory for the yugabyted server that needs to be stopped. --log_dir *log-directory* -: The log directory for the yugabyted server to connect to. 
+: The log directory for the yugabyted server that needs to be stopped. ----- -### demo +### version -Use the `yugabyted demo` command to use the demo [Northwind sample dataset](../../../sample-data/northwind/) with YugabyteDB. +Use the `yugabyted version` command to check the version number. #### Syntax ```sh -Usage: yugabyted demo [command] [flags] +Usage: yugabyted version [flags] ``` -#### Commands - -The following subcommands are available for the `yugabyted demo` command: - -- [connect](#connect-1) -- [destroy](#destroy-1) - -#### connect - -Use the `yugabyted demo connect` subcommand to load the [Northwind sample dataset](../../../sample-data/northwind/) into a new `yb_demo_northwind` SQL database, and then open the `ysqlsh` prompt for the same database. - -#### destroy - -Use the `yuagbyted demo destroy` subcommand to shut down the yugabyted single-node cluster and remove data, configuration, and log directories. This subcommand also deletes the `yb_demo_northwind` database. - #### Flags -h | --help -: Print the help message and exit. - ---config *config-file* -: The path to the configuration file of the yugabyted server to connect to or destroy. +: Print the command-line help and exit. --data_dir *data-directory* -: The data directory for the yugabyted server to connect to or destroy. +: The data directory for the yugabyted server whose version is desired. --base_dir *base-directory* -: The base directory for the yugabyted server to connect to or destroy. +: The base directory for the yugabyted server whose version is desired. --log_dir *log-directory* -: The log directory for the yugabyted server to connect to or destroy. +: The log directory for the yugabyted server whose version is desired. ----- @@ -708,7 +820,7 @@ The loopback addresses do not persist upon rebooting your computer. If you are running YugabyteDB on your local computer, you can't run more than one cluster at a time. 
To set up a new local YugabyteDB cluster using yugabyted, first destroy the currently running cluster. -To destroy a local single-node cluster, use the [destroy](#destroy) command as follows: +To destroy a local single-node cluster, use the [destroy](#destroy-1) command as follows: ```sh ./bin/yugabyted destroy @@ -716,11 +828,7 @@ To destroy a local single-node cluster, use the [destroy](#destroy) command as f To destroy a local multi-node cluster, use the `destroy` command with the `--base_dir` flag set to the base directory path of each of the nodes. For example, for a three node cluster, you would execute commands similar to the following: -```sh -./bin/yugabyted destroy --base_dir=/tmp/ybd1 -./bin/yugabyted destroy --base_dir=/tmp/ybd2 -./bin/yugabyted destroy --base_dir=/tmp/ybd3 -``` +{{%cluster/cmd op="destroy" nodes="1,2,3"%}} ```sh ./bin/yugabyted destroy --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node1 @@ -958,7 +1066,7 @@ To create a secure multi-region cluster: To create a multi-region cluster: -1. Start the first node by running the `yugabyted start` command, pass in the `--cloud_location` and `--fault_tolerance` flags to set the node location details as follows: +1. Start the first node by running the `yugabyted start` command, pass in the `--cloud_location` and `--fault_tolerance` flags to set the node location details, as follows: ```sh ./bin/yugabyted start --advertise_address= \ @@ -1019,6 +1127,7 @@ You can run yugabyted in a Docker container. For more information, see the [Quic The following example shows how to create a multi-region cluster. If the `~/yb_docker_data` directory already exists, delete and re-create it. +Note that the `--join` flag only accepts labels that conform to DNS syntax, so name your Docker container accordingly using only letters, numbers, and hyphens. 
```sh
rm -rf ~/yb_docker_data

@@ -1048,6 +1157,227 @@ docker run -d --name yugabytedb-node3 --net yb-network \
    --base_dir=/home/yugabyte/yb_data --background=false
```

+### Create and manage read replica clusters
+
+To create a read replica cluster, you first create a YugabyteDB cluster; this example assumes a 3-node cluster is deployed. Refer to [Create a local multi-node cluster](#create-a-local-multi-node-cluster).
+
+You add read replica nodes to the primary cluster using the `--join` and `--read_replica` flags.
+
+#### Create a read replica cluster
+
+{{< tabpane text=true >}}
+
+  {{% tab header="Secure" lang="secure-2" %}}
+
+To create a secure read replica cluster, generate and copy the certificates for each read replica node, similar to how you create [certificates for a local multi-node cluster](#create-certificates-for-a-secure-local-multi-node-cluster).
+
+```sh
+./bin/yugabyted cert generate_server_certs --hostnames=127.0.0.4,127.0.0.5,127.0.0.6,127.0.0.7,127.0.0.8
+```
+
+Copy the certificates to the respective read replica nodes in the `/certs` directory:
+
+```sh
+cp $HOME/var/generated_certs/127.0.0.4/* $HOME/yugabyte-{{< yb-version version="v2.20" >}}/node4/certs
+cp $HOME/var/generated_certs/127.0.0.5/* $HOME/yugabyte-{{< yb-version version="v2.20" >}}/node5/certs
+cp $HOME/var/generated_certs/127.0.0.6/* $HOME/yugabyte-{{< yb-version version="v2.20" >}}/node6/certs
+cp $HOME/var/generated_certs/127.0.0.7/* $HOME/yugabyte-{{< yb-version version="v2.20" >}}/node7/certs
+cp $HOME/var/generated_certs/127.0.0.8/* $HOME/yugabyte-{{< yb-version version="v2.20" >}}/node8/certs
+```
+
+To create the read replica cluster, do the following:
+
+1. On macOS, configure loopback addresses for the additional nodes as follows:
+
+    ```sh
+    sudo ifconfig lo0 alias 127.0.0.4
+    sudo ifconfig lo0 alias 127.0.0.5
+    sudo ifconfig lo0 alias 127.0.0.6
+    sudo ifconfig lo0 alias 127.0.0.7
+    sudo ifconfig lo0 alias 127.0.0.8
+    ```
+
+1. 
Add read replica nodes using the `--join` and `--read_replica` flags, as follows: + + ```sh + ./bin/yugabyted start \ + --secure \ + --advertise_address=127.0.0.4 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node4 \ + --cloud_location=aws.us-east-1.us-east-1d \ + --read_replica + + ./bin/yugabyted start \ + --secure \ + --advertise_address=127.0.0.5 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node5 \ + --cloud_location=aws.us-east-1.us-east-1d \ + --read_replica + + ./bin/yugabyted start \ + --secure \ + --advertise_address=127.0.0.6 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node6 \ + --cloud_location=aws.us-east-1.us-east-1e \ + --read_replica + + ./bin/yugabyted start \ + --secure \ + --advertise_address=127.0.0.7 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node7 \ + --cloud_location=aws.us-east-1.us-east-1f \ + --read_replica + + ./bin/yugabyted start \ + --secure \ + --advertise_address=127.0.0.8 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node8 \ + --cloud_location=aws.us-east-1.us-east-1f \ + --read_replica + ``` + + {{% /tab %}} + + {{% tab header="Insecure" lang="basic-2" %}} + +To create the read replica cluster, do the following: + +1. On macOS, configure loopback addresses for the additional nodes as follows: + + ```sh + sudo ifconfig lo0 alias 127.0.0.4 + sudo ifconfig lo0 alias 127.0.0.5 + sudo ifconfig lo0 alias 127.0.0.6 + sudo ifconfig lo0 alias 127.0.0.7 + sudo ifconfig lo0 alias 127.0.0.8 + ``` + +1. 
Add read replica nodes using the `--join` and `--read_replica` flags, as follows: + + ```sh + ./bin/yugabyted start \ + --advertise_address=127.0.0.4 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node4 \ + --cloud_location=aws.us-east-1.us-east-1d \ + --read_replica + + ./bin/yugabyted start \ + --advertise_address=127.0.0.5 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node5 \ + --cloud_location=aws.us-east-1.us-east-1d \ + --read_replica + + ./bin/yugabyted start \ + --advertise_address=127.0.0.6 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node6 \ + --cloud_location=aws.us-east-1.us-east-1e \ + --read_replica + + ./bin/yugabyted start \ + --advertise_address=127.0.0.7 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node7 \ + --cloud_location=aws.us-east-1.us-east-1f \ + --read_replica + + ./bin/yugabyted start \ + --advertise_address=127.0.0.8 \ + --join=127.0.0.1 \ + --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node8 \ + --cloud_location=aws.us-east-1.us-east-1f \ + --read_replica + ``` + + {{% /tab %}} + +{{< /tabpane >}} + +#### Configure a new read replica cluster + +After starting all read replica nodes, configure the read replica cluster using `configure_read_replica new` command as follows: + +```sh +./bin/yugabyted configure_read_replica new --base_dir ~/yb-cluster/node4 +``` + +The preceding command automatically determines the data placement constraint based on the `--cloud_location` of each node in the cluster. After the command is run, the primary cluster will begin asynchronous replication with the read replica cluster. + +You can set the data placement constraint manually and specify the number of replicas in each cloud location using the `--data_placement_constraint` flag, which takes the comma-separated value of `cloud.region.zone:num_of_replicas`. 
For example:
+
+```sh
+./bin/yugabyted configure_read_replica new \
+    --base_dir ~/yb-cluster/node4 \
+    --data_placement_constraint=aws.us-east-1.us-east-1d:1,aws.us-east-1.us-east-1e:1,aws.us-east-1.us-east-1f:1
+```
+
+When specifying the `--data_placement_constraint` flag, you must do the following:
+
+- Include all the zones where a read replica node is to be placed.
+- Specify the number of replicas for each zone; each zone should have at least one read replica node.
+
+  The number of replicas in any cloud location should be less than or equal to the number of read replica nodes deployed in that cloud location.
+
+The replication factor of the read replica cluster defaults to the number of different cloud locations containing read replica nodes; that is, one replica in each cloud location.
+
+You can set the replication factor manually using the `--rf` flag. For example:
+
+```sh
+./bin/yugabyted configure_read_replica new \
+    --base_dir ~/yb-cluster/node4 \
+    --rf 
+```
+
+When specifying the `--rf` flag:
+
+- If the `--data_placement_constraint` flag is provided:
+  - All rules for using the `--data_placement_constraint` flag apply.
+  - The replication factor should be equal to the number of replicas specified using the `--data_placement_constraint` flag.
+- If the `--data_placement_constraint` flag is not provided:
+  - The replication factor should be less than or equal to the total number of read replica nodes deployed.
+  - The replication factor should be greater than or equal to the number of cloud locations that have a read replica node; that is, there should be at least one replica in each cloud location.
+
+#### Modify a configured read replica cluster
+
+You can modify an existing read replica cluster configuration by using the `configure_read_replica modify` command and specifying new values for the `--data_placement_constraint` and `--rf` flags.
+
+For example:
+
+```sh
+./bin/yugabyted configure_read_replica modify \
+    --base_dir=~/yb-cluster/node4 \
+    --data_placement_constraint=aws.us-east-1.us-east-1d:2,aws.us-east-1.us-east-1e:1,aws.us-east-1.us-east-1f:2
+```
+
+This changes the data placement configuration of the read replica cluster to have two replicas in the `aws.us-east-1.us-east-1d` cloud location, compared to the single replica in the original configuration.
+
+When specifying new `--data_placement_constraint` or `--rf` values, the same rules apply as when configuring a new read replica cluster.
+
+#### Delete a read replica cluster
+
+To delete a read replica cluster, destroy all read replica nodes using the `destroy` command:
+
+```sh
+./bin/yugabyted destroy --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node4
+./bin/yugabyted destroy --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node5
+./bin/yugabyted destroy --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node6
+./bin/yugabyted destroy --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node7
+./bin/yugabyted destroy --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node8
+```
+
+After destroying the nodes, run the `configure_read_replica delete` command to delete the read replica configuration:
+
+```sh
+./bin/yugabyted configure_read_replica delete --base_dir=$HOME/yugabyte-{{< yb-version version="v2.20" >}}/node1
+```
+
### Enable and disable encryption at rest

To enable [encryption at rest](../../../secure/encryption-at-rest/) in a deployed local cluster, run the following command:

@@ -1094,13 +1424,17 @@ Upgrading an existing YugabyteDB cluster that was deployed using yugabyted inclu

1. Stop the running YugabyteDB node using the `yugabyted stop` command.

+    ```sh
+    ./bin/yugabyted stop --base_dir 
+    ```
+
1. Start the new yugabyted process by executing the `yugabyted start` command. Use the previously configured `--base_dir` when restarting the instance.

Repeat the steps on all the nodes of the cluster, one node at a time.
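+
+The stop-then-start sequence above can be sketched as a loop over the nodes; the base directory layout is an assumed example, and in practice you should wait for each node to rejoin the cluster before moving to the next:
+
+```sh
+# Illustrative rolling restart after installing the new release;
+# node directories are hypothetical placeholders
+for node in node1 node2 node3; do
+    ./bin/yugabyted stop --base_dir="$HOME/yb-cluster/$node"
+    ./bin/yugabyted start --base_dir="$HOME/yb-cluster/$node"
+done
+```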
### Upgrade a cluster from single to multi zone -The following steps assume that you have a running YugabyteDB cluster deployed using `yugabyted`, and have downloaded the update: +The following steps assume that you have a running YugabyteDB cluster deployed using yugabyted, and have downloaded the update: 1. Stop the first node by using `yugabyted stop` command: