diff --git a/docs/en/sql-reference/sql-statements/data-manipulation/BROKER_LOAD.md b/docs/en/sql-reference/sql-statements/data-manipulation/BROKER_LOAD.md
index df940fe417179..bb1e897050192 100644
--- a/docs/en/sql-reference/sql-statements/data-manipulation/BROKER_LOAD.md
+++ b/docs/en/sql-reference/sql-statements/data-manipulation/BROKER_LOAD.md
@@ -205,7 +205,7 @@ Open-source HDFS supports two authentication methods: simple authentication and
 | Parameter                       | Description                                                  |
 | ------------------------------- | ------------------------------------------------------------ |
 | hadoop.security.authentication  | The authentication method. Valid values: `simple` and `kerberos`. Default value: `simple`. `simple` represents simple authentication, meaning no authentication, and `kerberos` represents Kerberos authentication. |
-| kerberos_principal              | The Kerberos principal to be authenticated. Each principal consists of the following three parts to ensure that it is unique across the HDFS cluster: |
+| kerberos_principal              | The Kerberos principal to be authenticated. Each principal consists of the following three parts to ensure that it is unique across the HDFS cluster: Example: `nn/zelda1@ZELDA.COM`. |
 | kerberos_keytab                 | The save path of the Kerberos keytab file. |
 | kerberos_keytab_content         | The Base64-encoded content of the Kerberos keytab file. You can choose to specify either `kerberos_keytab` or `kerberos_keytab_content`. |
@@ -224,11 +224,11 @@ Open-source HDFS supports two authentication methods: simple authentication and
 You can configure an HA mechanism for the NameNode of the HDFS cluster. This way, if the NameNode is switched over to another node, StarRocks can automatically identify the new node that serves as the NameNode. This includes the following scenarios:
 
-- If you load data from a single HDFS cluster that has one Kerberos user configured, both load-based loading and load-free loading are supported.
-
-  - To perform load-based loading, make sure that at least one independent [broker group](../../../deployment/deploy_broker.md) is deployed, and place the `hdfs-site.xml` file to the `{deploy}/conf` path on the broker node that serves the HDFS cluster. StarRocks will add the `{deploy}/conf` path to the environment variable `CLASSPATH` upon broker startup, allowing the brokers to read information about the HDFS cluster nodes.
+- If you load data from a single HDFS cluster that has one Kerberos user configured, both broker-based loading and broker-free loading are supported.
+
+  - To perform broker-based loading, make sure that at least one independent [broker group](../../../deployment/deploy_broker.md) is deployed, and place the `hdfs-site.xml` file to the `{deploy}/conf` path on the broker node that serves the HDFS cluster. StarRocks will add the `{deploy}/conf` path to the environment variable `CLASSPATH` upon broker startup, allowing the brokers to read information about the HDFS cluster nodes.
 
-  - To perform load-free loading, place the `hdfs-site.xml` file to the `{deploy}/conf` paths of each FE node and each BE node.
+  - To perform broker-free loading, place the `hdfs-site.xml` file to the `{deploy}/conf` paths of each FE node and each BE node.
 
 - If you load data from a single HDFS cluster that has multiple Kerberos users configured, only broker-based loading is supported.
 
   Make sure that at least one independent [broker group](../../../deployment/deploy_broker.md) is deployed, and place the `hdfs-site.xml` file to the `{deploy}/conf` path on the broker node that serves the HDFS cluster. StarRocks will add the `{deploy}/conf` path to the environment variable `CLASSPATH` upon broker startup, allowing the brokers to read information about the HDFS cluster nodes.
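
For context on the authentication parameters this patch documents, here is a minimal sketch of a broker-based load from a Kerberos-secured HDFS cluster. The label, database, table, file path, broker name, and keytab path are placeholders for illustration and are not taken from the patch; only the principal `nn/zelda1@ZELDA.COM` comes from the example added above.

```SQL
LOAD LABEL test_db.label_kerberos_example
(
    -- Placeholder HDFS path and target table.
    DATA INFILE("hdfs://<hdfs_host>:<hdfs_port>/user/starrocks/input/file1.csv")
    INTO TABLE my_table
    COLUMNS TERMINATED BY ","
)
WITH BROKER "my_broker"
(
    -- Kerberos authentication, as described in the parameter table above.
    "hadoop.security.authentication" = "kerberos",
    "kerberos_principal" = "nn/zelda1@ZELDA.COM",
    -- Specify either kerberos_keytab or kerberos_keytab_content, not both.
    "kerberos_keytab" = "/opt/keytab/nn.keytab"
)
PROPERTIES
(
    "timeout" = "3600"
);
```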