fix spelling errors
hfxsd committed Dec 19, 2024
1 parent 65a5b18 commit 9b44054
Showing 2 changed files with 12 additions and 12 deletions.
6 changes: 3 additions & 3 deletions tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -52,7 +52,7 @@ Currently, TiDB Cloud supports Private Connect for generic Kafka only. It does n
- Endpoint Service Name
- Bootstrap Ports

- - If your Apache Kafka service is hosted on Google Cloud, follow [Set Up Self Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-psc.md) to ensure that the network connection is properly configured. After setup, provide the following information in the TiDB Cloud console to create the changefeedd:
+ - If your Apache Kafka service is hosted on Google Cloud, follow [Set Up Self Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-psc.md) to ensure that the network connection is properly configured. After setup, provide the following information in the TiDB Cloud console to create the changefeed:

- ID in Kafka Advertised Listener Pattern
- Service Attachment
@@ -108,13 +108,13 @@ For example, if your Kafka cluster is in Confluent Cloud, you can see [Resources
1. Authorize the TiDB Cloud AWS account to create an endpoint for your endpoint service. The TiDB Cloud AWS account ID is provided in the tip on the web page.
2. Make sure you select the same **Number of AZs** and **Suggested Kafka Endpoint Service AZs**, and fill the same unique ID in **Kafka Advertised Listener Pattern** when you [set up self hosted Kafka Private Link service in AWS](/tidb-cloud/setup-self-hosted-kafka-pls.md) in the **Network** section.
3. Fill the **Endpoint Service Name** which is configured in [Setup Self Hosted Kafka Private Link Service in AWS](/tidb-cloud/setup-self-hosted-kafka-pls.md).
- 4. Fill the **Boostrap Ports**. It is recommended that you set at least one port for one AZ. You can use commas `,` to separate multiple ports.
+ 4. Fill the **Bootstrap Ports**. It is recommended that you set at least one port for one AZ. You can use commas `,` to separate multiple ports.

- If you select **Private Service Connect**, do the following:

1. Ensure that you fill the same unique ID in **Kafka Advertised Listener Pattern** when you [Setup Self Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-psc.md) in the **Network** section.
2. Fill the **Service Attachment** that you have configured in [Setup Self Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-psc.md).
- 3. Fill the **Boostrap Ports**. It is recommended that you provide more than one ports. You can use commas `,` to separate multiple ports.
+ 3. Fill the **Bootstrap Ports**. It is recommended that you provide more than one port. You can use commas `,` to separate multiple ports.

2. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
18 changes: 9 additions & 9 deletions tidb-cloud/setup-self-hosted-kafka-psc.md
@@ -33,18 +33,18 @@ The document provides an example of connecting to a Kafka Private Link service d
- Manage Private Service Connect
- Connect to VM Nodes to configure Kafka nodes

- 2. Create a [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) cluster in Google Cloud first. Ensure that Kafka deployment information is alligned with your TiDB cluster.
+ 2. Create a [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) cluster in Google Cloud first. Ensure that Kafka deployment information is aligned with your TiDB cluster.

1. In the [TiDB Cloud console](https://tidbcloud.com), navigate to the cluster overview page of the TiDB cluster, and then click **Changefeed** in the left navigation pane.
- 2. On the overview page, find the region of TiDB cluster. Ensure that your Kafka cluster will be deployed to the same region.
+ 2. On the overview page, find the region of the TiDB cluster. Ensure that your Kafka cluster will be deployed to the same region.
3. Click **Create Changefeed**.
1. In **Target Type**, select **Kafka**.
2. In **Connectivity Method**, select **Private Service Connect**.
4. Note down the Google Cloud project in **Reminders before proceeding**. You will use it to authorize the auto-accept endpoint creation request from TiDB Cloud.
5. Note down the **Zones of TiDB Cluster**. Your TiDB cluster is deployed in these zones. It is recommended that you deploy Kafka in these zones to reduce cross-zone traffic.
6. Pick a unique **Kafka Advertised Listener Pattern** for your Kafka Private Service Connect service.
1. Input a unique random string. It can only include numbers or lowercase letters. You will use it to generate **Kafka Advertised Listener Pattern** later.
- 2. Click **Check usage and generate** to check if the random string is unique and generate **Kafka Advertised Listener Pattern** that will be used to assemble the EXTERNAL advertised listener for kafka brokers, or configure Kafka-proxy.
+ 2. Click **Check usage and generate** to check if the random string is unique and generate **Kafka Advertised Listener Pattern** that will be used to assemble the EXTERNAL advertised listener for Kafka brokers, or configure Kafka-proxy.

Note down all the deployment information. You need to use it to configure your Kafka Private Service Connect service later.

@@ -75,7 +75,7 @@ Expose every Kafka broker to TiDB Cloud VPC with a unique port by using the PSC

You need to create two subnets for the Kafka VPC, one for the Kafka brokers and the other for the bastion node, to make it easy to configure the Kafka cluster.

- Go to the [Google Cloud console](https://cloud.google.com/cloud-console), and navigate to the [VPC networks](https://console.cloud.google.com/networking/networks/list) page to create the Kafka VPC with following attributes:
+ Go to the [Google Cloud console](https://cloud.google.com/cloud-console), and navigate to the [VPC networks](https://console.cloud.google.com/networking/networks/list) page to create the Kafka VPC with the following attributes:

- **Name**: `kafka-vpc`
- Subnets
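
For reference, the VPC and its two subnets can also be created with gcloud, roughly as sketched below. This is a hypothetical sketch: the subnet names and CIDR ranges are placeholders for the values you plan under **Subnets**, and the region is shown as `us-west1` only to match the example used later in this document.

```shell
# Hypothetical sketch; subnet names and CIDR ranges are placeholders.
# Create the Kafka VPC in custom subnet mode.
gcloud compute networks create kafka-vpc --subnet-mode=custom

# One subnet for the Kafka brokers and one for the bastion node.
gcloud compute networks subnets create broker-subnet \
    --network=kafka-vpc --region=us-west1 --range=10.0.0.0/24
gcloud compute networks subnets create bastion-subnet \
    --network=kafka-vpc --region=us-west1 --range=10.0.1.0/24
```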
@@ -149,7 +149,7 @@ Go to the [VM instances](https://console.cloud.google.com/compute/instances) pag

```shell
# Run this command to authorize gcloud to access the Cloud Platform with Google user credentials
- # Please following the instruction in output to finish the login
+ # Follow the instruction in output to finish the login
gcloud auth login
# Copy binaries to broker nodes
```
2. Configure two **broker** listeners. INTERNAL for internal access; EXTERNAL for external access from TiDB Cloud.

2. For `advertised.listeners`, do the following:
- 1. Configure an INTERNAL advertised listener for every broker with internal ip of broker node, advertise internal Kafka clients use this address to visit the broker.
+ 1. Configure an INTERNAL advertised listener for every broker with the internal IP of the broker node; internal Kafka clients use this address to access the broker.
2. Configure an EXTERNAL advertised listener based on the **Kafka Advertised Listener Pattern** you get from TiDB Cloud for every broker node to help TiDB Cloud differentiate between different brokers. Different EXTERNAL advertised listeners help Kafka clients on the TiDB Cloud side route requests to the right broker.
- `<port>` differentiates brokers from the Kafka Private Service Connect access point. Plan a port range for the EXTERNAL advertised listeners of all brokers. These ports do not have to be actual ports that brokers listen on. They are ports that the Private Service Connect load balancer listens on and forwards to different brokers.
- It is recommended to configure different broker IDs for different brokers to make troubleshooting easier.

- 3. The planing values:
+ 3. The planning values:
- CONTROLLER port: `29092`
- INTERNAL port: `9092`
- EXTERNAL: `39092`
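
Putting the planning values above together, a minimal sketch of the listener-related settings for one broker is shown below. This is not the exact configuration from this guide: the broker IP, the advertised EXTERNAL host and port, and the config file path are placeholders, and the EXTERNAL host must be assembled from the **Kafka Advertised Listener Pattern** that TiDB Cloud generated for you.

```shell
# Hypothetical sketch for a single broker; the IP address, the DNS name, the advertised
# EXTERNAL port (9093), and the file path are placeholders. Each broker also needs its own
# unique broker/node ID and a different advertised EXTERNAL host and port (not shown here).
# INTERNAL serves clients inside kafka-vpc, CONTROLLER carries quorum traffic, and EXTERNAL
# (listening on 39092) is the listener that the Private Service Connect load balancer
# forwards TiDB Cloud traffic to. The advertised EXTERNAL port is the unique per-broker
# port exposed on the load balancer, not a port the broker itself listens on.
cat >> /opt/kafka/config/server.properties <<EOF
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:39092,CONTROLLER://0.0.0.0:29092
inter.broker.listener.name=INTERNAL
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT
advertised.listeners=INTERNAL://10.0.0.11:9092,EXTERNAL://b1.abc.us-west1.gcp.example.com:9093
EOF
```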
@@ -654,7 +654,7 @@ Assume that you already have a Kafka cluster running in the same region as the T
- **Name**: `proxy-psc-subnet`
- **VPC Network**: your network
- **Region**: `us-west1`
- - **IPv4 range**: set the CIDR based on your network planing
+ - **IPv4 range**: set the CIDR based on your network planning
- **Accepted projects**: the Google Cloud project of TiDB Cloud you get in [Prerequisites](#prerequisites), for example `tidbcloud-prod-000`.
3. Navigate to the detail page of the **kafka-proxy-psc**. Note down the `Service attachment`, for example `projects/tidbcloud-dp-stg-000/regions/us-west1/serviceAttachments/kafka-proxy-psc`, which will be used in TiDB Cloud to connect to this PSC.
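
For reference, the console steps above correspond roughly to the gcloud commands sketched below. This is a hypothetical sketch: `YOUR_NETWORK`, the CIDR range, and the forwarding-rule name are placeholders, and the accepted project must be the TiDB Cloud project you noted in [Prerequisites](#prerequisites).

```shell
# Hypothetical sketch; network, CIDR range, forwarding rule, and project ID are placeholders.
# Reserve a NAT subnet dedicated to Private Service Connect.
gcloud compute networks subnets create proxy-psc-subnet \
    --network=YOUR_NETWORK --region=us-west1 \
    --range=10.1.0.0/24 --purpose=PRIVATE_SERVICE_CONNECT

# Publish the internal load balancer in front of kafka-proxy as a service attachment and
# auto-accept connection requests from the TiDB Cloud project (up to 10 connections).
gcloud compute service-attachments create kafka-proxy-psc \
    --region=us-west1 \
    --producer-forwarding-rule=YOUR_KAFKA_PROXY_FORWARDING_RULE \
    --connection-preference=ACCEPT_MANUAL \
    --consumer-accept-list=tidbcloud-prod-000=10 \
    --nat-subnets=proxy-psc-subnet
```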
2. After you proceed to the **Configure the changefeed target** > **Connectivity Method** > **Private Service Connect**, fill in the following fields with the corresponding values and other fields as needed.
- - **Kafka Advertised Listener Pattern**: `abc`. The same as the unique random string you use to generate **Kafka Advertised Listener Pattern** in [Prerequistes](#prerequisites).
+ - **Kafka Advertised Listener Pattern**: `abc`. The same as the unique random string you use to generate **Kafka Advertised Listener Pattern** in [Prerequisites](#prerequisites).
- **Service Attachment**: the kafka-proxy service attachment of PSC, for example `projects/tidbcloud-dp-stg-000/regions/us-west1/serviceAttachments/kafka-proxy-psc`.
- **Bootstrap Ports**: `9092,9093,9094`
