
Commit

ticdc: fix typos (pingcap#11147)
shichun-0415 authored Nov 4, 2022
1 parent 87b3ba9 commit 962bb98
Showing 4 changed files with 25 additions and 23 deletions.
30 changes: 15 additions & 15 deletions ticdc/manage-ticdc.md
@@ -31,7 +31,7 @@ tiup cluster upgrade <cluster-name> v6.3.0

This section introduces how to modify the configuration of a TiCDC cluster using the [`tiup cluster edit-config`](/tiup/tiup-component-cluster-edit-config.md) command of TiUP. The following example changes the value of `gc-ttl` from the default `86400` to `3600`, namely, one hour.

-First, execute the following command. You need to replace `<cluster-name>` with your actual cluster name.
+First, run the following command. You need to replace `<cluster-name>` with your actual cluster name.

{{< copyable "shell-regular" >}}

@@ -54,7 +54,7 @@ Then, enter the vi editor page and modify the `cdc` configuration under [`server-
gc-ttl: 3600
```

-After the modification, execute the `tiup cluster reload -R cdc` command to reload the configuration.
+After the modification, run the `tiup cluster reload -R cdc` command to reload the configuration.
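
For reference, the whole `gc-ttl` change described above comes down to two TiUP commands; a sketch, with the cluster name as a placeholder:

```shell
# Open the cluster topology in an editor and set gc-ttl to 3600 under the cdc configuration.
tiup cluster edit-config <cluster-name>
# Reload only the cdc components so that the new value takes effect.
tiup cluster reload <cluster-name> -R cdc
```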

## Use TLS

@@ -118,18 +118,18 @@ The states in the above state transfer diagram are described as follows:

The numbers in the above state transfer diagram are described as follows.

-- ① Execute the `changefeed pause` command
-- ② Execute the `changefeed resume` command to resume the replication task
+- ① Run the `changefeed pause` command.
+- ② Run the `changefeed resume` command to resume the replication task.
- ③ Recoverable errors occur during the `changefeed` operation, and the operation is resumed automatically.
-- ④ Execute the `changefeed resume` command to resume the replication task
-- ⑤ Recoverable errors occur during the `changefeed` operation
+- ④ Run the `changefeed resume` command to resume the replication task.
+- ⑤ Recoverable errors occur during the `changefeed` operation.
- ⑥ `changefeed` has reached the preset `TargetTs`, and the replication is automatically stopped.
- ⑦ `changefeed` is suspended longer than the duration specified by `gc-ttl`, and cannot be resumed.
- ⑧ `changefeed` experienced an unrecoverable error when trying to execute automatic recovery.

#### Create a replication task

-Execute the following commands to create a replication task:
+Run the following commands to create a replication task:

```shell
cdc cli changefeed create --server=http://10.0.10.25:8300 --sink-uri="mysql://root:[email protected]:3306/" --changefeed-id="simple-replication-task" --sort-engine="unified"
```
@@ -324,7 +324,7 @@ In the command above, `changefeed.toml` is the configuration file for the replic
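
The creation command that this hunk refers to is collapsed in this view; presumably it passes the task configuration file via a `--config` flag, along the lines of the following sketch (the flag and the file name are assumptions based on the surrounding text):

```shell
cdc cli changefeed create --server=http://10.0.10.25:8300 --sink-uri="mysql://root:[email protected]:3306/" --changefeed-id="simple-replication-task" --config changefeed.toml
```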
#### Query the replication task list
-Execute the following command to query the replication task list:
+Run the following command to query the replication task list:
{{< copyable "shell-regular" >}}
@@ -349,12 +349,12 @@ cdc cli changefeed list --server=http://10.0.10.25:8300
- `normal`: The replication task runs normally.
- `stopped`: The replication task is stopped (manually paused).
- `error`: The replication task is stopped (by an error).
-- `removed`: The replication task is removed. Tasks of this state are displayed only when you have specified the `--all` option. To see these tasks when this option is not specified, execute the `changefeed query` command.
-- `finished`: The replication task is finished (data is replicated to the `target-ts`). Tasks of this state are displayed only when you have specified the `--all` option. To see these tasks when this option is not specified, execute the `changefeed query` command.
+- `removed`: The replication task is removed. Tasks of this state are displayed only when you have specified the `--all` option. To see these tasks when this option is not specified, run the `changefeed query` command.
+- `finished`: The replication task is finished (data is replicated to the `target-ts`). Tasks of this state are displayed only when you have specified the `--all` option. To see these tasks when this option is not specified, run the `changefeed query` command.
#### Query a specific replication task
-To query a specific replication task, execute the `changefeed query` command. The query result includes the task information and the task state. You can specify the `--simple` or `-s` argument to simplify the query result that will only include the basic replication state and the checkpoint information. If you do not specify this argument, detailed task configuration, replication states, and replication table information are output.
+To query a specific replication task, run the `changefeed query` command. The query result includes the task information and the task state. You can specify the `--simple` or `-s` argument to simplify the query result that will only include the basic replication state and the checkpoint information. If you do not specify this argument, detailed task configuration, replication states, and replication table information are output.
```shell
cdc cli changefeed query -s --server=http://10.0.10.25:8300 --changefeed-id=simple-replication-task
```
@@ -454,7 +454,7 @@ In the command and result above:
#### Pause a replication task
-Execute the following command to pause a replication task:
+Run the following command to pause a replication task:
```shell
cdc cli changefeed pause --server=http://10.0.10.25:8300 --changefeed-id simple-replication-task
```
@@ -466,7 +466,7 @@ In the above command:
#### Resume a replication task
-Execute the following command to resume a paused replication task:
+Run the following command to resume a paused replication task:
```shell
cdc cli changefeed resume --server=http://10.0.10.25:8300 --changefeed-id simple-replication-task
```
@@ -483,7 +483,7 @@ cdc cli changefeed resume --server=http://10.0.10.25:8300 --changefeed-id simple
#### Remove a replication task
-Execute the following command to remove a replication task:
+Run the following command to remove a replication task:
{{< copyable "shell-regular" >}}
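
The remove command itself is collapsed in this view; it presumably follows the same pattern as the pause and resume commands shown above, for example:

```shell
cdc cli changefeed remove --server=http://10.0.10.25:8300 --changefeed-id simple-replication-task
```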
@@ -814,7 +814,7 @@ Unified sorter is the sorting engine in TiCDC. It can mitigate OOM problems caus
For the changefeeds created using `cdc cli` after v4.0.13, Unified Sorter is enabled by default; for the changefeeds that have existed before v4.0.13, the previous configuration is used.
-To check whether or not the Unified Sorter feature is enabled on a changefeed, you can execute the following example command (assuming the IP address of the PD instance is `http://10.0.10.25:2379`):
+To check whether or not the Unified Sorter feature is enabled on a changefeed, you can run the following example command (assuming the IP address of the PD instance is `http://10.0.10.25:2379`):
```shell
cdc cli --server="http://10.0.10.25:8300" changefeed query --changefeed-id=simple-replication-task | grep 'sort-engine'
```
2 changes: 1 addition & 1 deletion ticdc/ticdc-faq.md
@@ -231,7 +231,7 @@ From the result, you can see that the table schema before and after the replicat

Since v5.0.1 or v4.0.13, for each replication to MySQL, TiCDC automatically sets `explicit_defaults_for_timestamp = ON` to ensure that the time type is consistent between the upstream and downstream. For versions earlier than v5.0.1 or v4.0.13, pay attention to the compatibility issue caused by the inconsistent `explicit_defaults_for_timestamp` value when using TiCDC to replicate the time type data.

-## `enable-old-value` is set to `true` when I create a TiCDC replication task, but `INSERT`/`UPDATE` statements from the upstream become `REPLACE INTO` after being replicated to the downstream
+## Why do `INSERT`/`UPDATE` statements from the upstream become `REPLACE INTO` after being replicated to the downstream if I set `enable-old-value` to `true` when I create a TiCDC replication task?

When a changefeed is created in TiCDC, the `safe-mode` setting defaults to `true`, which causes TiCDC to execute `REPLACE INTO` statements in the downstream for the upstream `INSERT`/`UPDATE` statements.

6 changes: 4 additions & 2 deletions ticdc/ticdc-overview.md
@@ -96,7 +96,7 @@ Currently, the following scenarios are not supported:
- The TiKV cluster that uses RawKV alone.
- The [DDL operation `CREATE SEQUENCE`](/sql-statements/sql-statement-create-sequence.md) and the [SEQUENCE function](/sql-statements/sql-statement-create-sequence.md#sequence-function) in TiDB. When the upstream TiDB uses `SEQUENCE`, TiCDC ignores `SEQUENCE` DDL operations/functions performed upstream. However, DML operations using `SEQUENCE` functions can be correctly replicated.

-TiCDC only provides partial support for scenarios of large transactions in the upstream. For details, refer to [FAQ: Does TiCDC support replicating large transactions? Is there any risk?](/ticdc/ticdc-faq.md#does-ticdc-support-replicating-large-transactions-is-there-any-risk).
+TiCDC only provides partial support for scenarios of large transactions in the upstream. For details, refer to [Does TiCDC support replicating large transactions? Is there any risk?](/ticdc/ticdc-faq.md#does-ticdc-support-replicating-large-transactions-is-there-any-risk).

> **Note:**
>
@@ -127,7 +127,9 @@ When using the `cdc cli` tool of TiCDC v5.0.0-rc to operate a v4.0.x TiCDC clust

- If the TiCDC cluster is v4.0.9 or a later version, using the v5.0.0-rc `cdc cli` tool to create a replication task will cause the old value and unified sorter features to be unexpectedly enabled by default.

-Solutions: Use the `cdc` executable file corresponding to the TiCDC cluster version to perform the following operations:
+Solutions:
+
+Use the `cdc` executable file corresponding to the TiCDC cluster version to perform the following operations:

1. Delete the changefeed created using the v5.0.0-rc `cdc cli` tool. For example, run the `tiup cdc:v4.0.9 cli changefeed remove -c xxxx --pd=xxxxx --force` command.
2. If the replication task is stuck, restart the TiCDC cluster. For example, run the `tiup cluster restart <cluster_name> -R cdc` command.
10 changes: 5 additions & 5 deletions ticdc/troubleshoot-ticdc.md
@@ -129,19 +129,19 @@ If the downstream is a special MySQL environment (a public cloud RDS or some MyS

Refer to [Notes for compatibility](/ticdc/manage-ticdc.md#notes-for-compatibility).

-## The `start-ts` timestamp of the TiCDC task is quite different from the current time. During the execution of this task, replication is interrupted and an error `[CDC:ErrBufferReachLimit]` occurs
+## The `start-ts` timestamp of the TiCDC task is quite different from the current time. During the execution of this task, replication is interrupted and an error `[CDC:ErrBufferReachLimit]` occurs. What should I do?

Since v4.0.9, you can try to enable the unified sorter feature in your replication task, or use the BR tool for an incremental backup and restore, and then start the TiCDC replication task from a new time.
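
For the first of these two options, the unified sorter can be selected when the changefeed is created, using the same creation command shown earlier in this changeset (addresses and the changefeed ID are the placeholders used throughout these docs):

```shell
cdc cli changefeed create --server=http://10.0.10.25:8300 --sink-uri="mysql://root:[email protected]:3306/" --changefeed-id="simple-replication-task" --sort-engine="unified"
```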

-## When the downstream of a changefeed is a database similar to MySQL and TiCDC executes a time-consuming DDL statement, all other changefeeds are blocked. How should I handle the issue?
+## When the downstream of a changefeed is a database similar to MySQL and TiCDC executes a time-consuming DDL statement, all other changefeeds are blocked. What should I do?

1. Pause the execution of the changefeed that contains the time-consuming DDL statement. Then you can see that other changefeeds are no longer blocked.
2. Search for the `apply job` field in the TiCDC log and confirm the `start-ts` of the time-consuming DDL statement.
3. Manually execute the DDL statement in the downstream. After the execution finishes, go on performing the following operations.
4. Modify the changefeed configuration and add the above `start-ts` to the `ignore-txn-start-ts` configuration item.
5. Resume the paused changefeed.
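
Put together, the workaround above might look roughly like the following; the changefeed ID and server address are the placeholders used elsewhere in this changeset, and the log file name is an assumption:

```shell
# 1. Pause the changefeed that contains the time-consuming DDL statement.
cdc cli changefeed pause --server=http://10.0.10.25:8300 --changefeed-id simple-replication-task
# 2. Search the TiCDC log for the "apply job" field to find the start-ts of the DDL statement.
grep 'apply job' cdc.log
# 3. Execute the DDL statement manually in the downstream.
# 4. Add the start-ts found in step 2 to the ignore-txn-start-ts item in the changefeed configuration.
# 5. Resume the paused changefeed.
cdc cli changefeed resume --server=http://10.0.10.25:8300 --changefeed-id simple-replication-task
```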

-## After I upgrade the TiCDC cluster to v4.0.8, the `[CDC:ErrKafkaInvalidConfig]Canal requires old value to be enabled` error is reported when I execute a changefeed
+## After I upgrade the TiCDC cluster to v4.0.8, the `[CDC:ErrKafkaInvalidConfig]Canal requires old value to be enabled` error is reported when I execute a changefeed. What should I do?

Since v4.0.8, if the `canal-json`, `canal` or `maxwell` protocol is used for output in a changefeed, TiCDC enables the old value feature automatically. However, if you have upgraded TiCDC from an earlier version to v4.0.8 or later, when the changefeed uses the `canal-json`, `canal` or `maxwell` protocol and the old value feature is disabled, this error is reported.

@@ -172,7 +172,7 @@ To fix the error, take the following steps:
cdc cli changefeed resume -c test-cf --pd=http://10.0.10.25:2379
```

-## The `[tikv:9006]GC life time is shorter than transaction duration, transaction starts at xx, GC safe point is yy` error is reported when I use TiCDC to create a changefeed
+## The `[tikv:9006]GC life time is shorter than transaction duration, transaction starts at xx, GC safe point is yy` error is reported when I use TiCDC to create a changefeed. What should I do?

You need to run the `pd-ctl service-gc-safepoint --pd <pd-addrs>` command to query the current GC safepoint and service GC safepoint. If the GC safepoint is smaller than the `start-ts` of the TiCDC replication task (changefeed), you can directly add the `--disable-gc-check` option to the `cdc cli changefeed create` command to create a changefeed.
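
A sketch of the two commands involved, reusing the placeholder addresses that appear elsewhere in this changeset:

```shell
# Query the current GC safepoint and the service GC safepoints.
pd-ctl service-gc-safepoint --pd http://10.0.10.25:2379
# If the GC safepoint is smaller than the start-ts of the changefeed, create the changefeed with the GC check disabled.
cdc cli changefeed create --server=http://10.0.10.25:8300 --sink-uri="mysql://root:[email protected]:3306/" --disable-gc-check
```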

@@ -181,7 +181,7 @@ If the result of `pd-ctl service-gc-safepoint --pd <pd-addrs>` does not have `gc
- If your PD version is v4.0.8 or earlier, refer to [PD issue #3128](https://github.com/tikv/pd/issues/3128) for details.
- If your PD is upgraded from v4.0.8 or an earlier version to a later version, refer to [PD issue #3366](https://github.com/tikv/pd/issues/3366) for details.

-## When I use TiCDC to replicate messages to Kafka, Kafka returns the `Message was too large` error
+## When I use TiCDC to replicate messages to Kafka, Kafka returns the `Message was too large` error. Why?

For TiCDC v4.0.8 or earlier versions, you cannot effectively control the size of the message output to Kafka only by configuring the `max-message-bytes` setting for Kafka in the Sink URI. To control the message size, you also need to increase the limit on the bytes of messages to be received by Kafka. To add such a limit, add the following configuration to the Kafka server configuration.

