From ec64dd92c405003b3c05f72da3e8959662cee862 Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Thu, 21 Dec 2023 17:13:58 +0800
Subject: [PATCH 01/14] Update mysql-schema.md

---
 mysql-schema.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mysql-schema.md b/mysql-schema.md
index f7f8368c6137b..c12ab4a318756 100644
--- a/mysql-schema.md
+++ b/mysql-schema.md
@@ -88,6 +88,20 @@ Currently, the `help_topic` is NULL.
 * `tidb_mdl_view`:a view of metadata locks. You can use it to view information about the currently blocked DDL statements
 * `tidb_mdl_info`:used internally by TiDB to synchronize metadata locks across nodes
 
+## System tables related to DDL statements
+
+* `tidb_ddl_history`: the history records of DDL statements
+* `tidb_ddl_jobs`: the metadata of DDL statements that are currently being executed by TiDB
+* `tidb_ddl_reorg`: the metadata of physical DDL statements (such as adding indexes) that are currently being executed by TiDB
+
+## System tables related to TiDB Distributed eXecution Framework (DXF)
+
+* `dist_framework_meta`: the metadata of the Distributed eXecution Framework (DXF) task scheduler
+* `tidb_global_task`: the metadata of the current DXF task
+* `tidb_global_task_history`: the metadata of the historical DXF task
+* `tidb_background_subtask`: the metadata of the current DXF subtask
+* `tidb_background_subtask_history`: the metadata of the historical DXF subtasks
+
 ## Miscellaneous system tables
 

From 34e85f90d60bd7913219d0a957b9e02bd68fedce Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Thu, 21 Dec 2023 17:17:58 +0800
Subject: [PATCH 02/14] Update mysql-schema.md

---
 mysql-schema.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mysql-schema.md b/mysql-schema.md
index c12ab4a318756..64d709398f245 100644
--- a/mysql-schema.md
+++ b/mysql-schema.md
@@ -98,7 +98,7 @@ Currently, the `help_topic` is NULL.
 
 * `dist_framework_meta`: the metadata of the Distributed eXecution Framework (DXF) task scheduler
 * `tidb_global_task`: the metadata of the current DXF task
-* `tidb_global_task_history`: the metadata of the historical DXF task
+* `tidb_global_task_history`: the metadata of the historical DXF tasks
 * `tidb_background_subtask`: the metadata of the current DXF subtask
 * `tidb_background_subtask_history`: the metadata of the historical DXF subtasks

From b89186a16adda548194c06049e9c5b2918ba6d38 Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Thu, 21 Dec 2023 17:36:09 +0800
Subject: [PATCH 03/14] Update mysql-schema.md

Co-authored-by: Aolin
---
 mysql-schema.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mysql-schema.md b/mysql-schema.md
index 64d709398f245..184c53e68b073 100644
--- a/mysql-schema.md
+++ b/mysql-schema.md
@@ -85,8 +85,8 @@ Currently, the `help_topic` is NULL.
 
 ## System tables related to metadata locks
 
-* `tidb_mdl_view`:a view of metadata locks. You can use it to view information about the currently blocked DDL statements
-* `tidb_mdl_info`:used internally by TiDB to synchronize metadata locks across nodes
+* `tidb_mdl_view`: a view of metadata locks. You can use it to view information about the currently blocked DDL statements
+* `tidb_mdl_info`: used internally by TiDB to synchronize metadata locks across nodes
 
 ## System tables related to DDL statements

From 925d074b63c23c262fb2c8caaf0d246b363dfe6b Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Thu, 21 Dec 2023 17:39:45 +0800
Subject: [PATCH 04/14] Update mysql-schema.md

---
 mysql-schema.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mysql-schema.md b/mysql-schema.md
index 184c53e68b073..588e102e31b8c 100644
--- a/mysql-schema.md
+++ b/mysql-schema.md
@@ -98,7 +98,7 @@ Currently, the `help_topic` is NULL.
 
 * `dist_framework_meta`: the metadata of the Distributed eXecution Framework (DXF) task scheduler
 * `tidb_global_task`: the metadata of the current DXF task
-* `tidb_global_task_history`: the metadata of the historical DXF tasks
+* `tidb_global_task_history`: the metadata of the historical DXF tasks, including both succeeded and failed tasks
 * `tidb_background_subtask`: the metadata of the current DXF subtask
 * `tidb_background_subtask_history`: the metadata of the historical DXF subtasks

From c8f009725aa8daddc0549c3666c30d6cd6914709 Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Fri, 22 Dec 2023 10:50:34 +0800
Subject: [PATCH 05/14] Update tiup-cluster-topology-reference.md

---
 tiup/tiup-cluster-topology-reference.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tiup/tiup-cluster-topology-reference.md b/tiup/tiup-cluster-topology-reference.md
index 62aecac97d245..572d81584252c 100644
--- a/tiup/tiup-cluster-topology-reference.md
+++ b/tiup/tiup-cluster-topology-reference.md
@@ -8,6 +8,8 @@ To deploy or scale TiDB using TiUP, you need to provide a topology file ([sample
 
 Similarly, to modify the cluster topology, you need to modify the topology file. The difference is that, after the cluster is deployed, you can only modify a part of the fields in the topology file. This document introduces each section of the topology file and each field in each section.
 
+When you deploy a TiDB cluster using TiUP, TiUP also deploys monitoring servers, such as Prometheus, Grafana, and Alertmanager. In the meantime, if you scale out this cluster, TiUP also adds the new nodes into monitoring scope. To customize the configurations of the monitoring servers mentioned above, you can follow the instructions in [Customize Configurations of Monitoring Servers](/tiup/customized-montior-in-tiup-environment.md).
+
 ## File structure
 
 A topology configuration file for TiDB deployment using TiUP might contain the following sections:

From 391aca515dfb1c54c8fb16f68aa0fc4834865eed Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Fri, 22 Dec 2023 10:59:11 +0800
Subject: [PATCH 06/14] Update mysql-schema.md

---
 mysql-schema.md | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/mysql-schema.md b/mysql-schema.md
index 588e102e31b8c..f7f8368c6137b 100644
--- a/mysql-schema.md
+++ b/mysql-schema.md
@@ -85,22 +85,8 @@ Currently, the `help_topic` is NULL.
 
 ## System tables related to metadata locks
 
-* `tidb_mdl_view`: a view of metadata locks. You can use it to view information about the currently blocked DDL statements
-* `tidb_mdl_info`: used internally by TiDB to synchronize metadata locks across nodes
-
-## System tables related to DDL statements
-
-* `tidb_ddl_history`: the history records of DDL statements
-* `tidb_ddl_jobs`: the metadata of DDL statements that are currently being executed by TiDB
-* `tidb_ddl_reorg`: the metadata of physical DDL statements (such as adding indexes) that are currently being executed by TiDB
-
-## System tables related to TiDB Distributed eXecution Framework (DXF)
-
-* `dist_framework_meta`: the metadata of the Distributed eXecution Framework (DXF) task scheduler
-* `tidb_global_task`: the metadata of the current DXF task
-* `tidb_global_task_history`: the metadata of the historical DXF tasks, including both succeeded and failed tasks
-* `tidb_background_subtask`: the metadata of the current DXF subtask
-* `tidb_background_subtask_history`: the metadata of the historical DXF subtasks
+* `tidb_mdl_view`:a view of metadata locks. You can use it to view information about the currently blocked DDL statements
+* `tidb_mdl_info`:used internally by TiDB to synchronize metadata locks across nodes
 
 ## Miscellaneous system tables

From dbbc009df6da7062800397ea08f19b5eea7cc7ed Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Fri, 22 Dec 2023 11:02:40 +0800
Subject: [PATCH 07/14] Update customized-montior-in-tiup-environment.md

---
 tiup/customized-montior-in-tiup-environment.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tiup/customized-montior-in-tiup-environment.md b/tiup/customized-montior-in-tiup-environment.md
index c2f90850f9412..e99878e34442e 100644
--- a/tiup/customized-montior-in-tiup-environment.md
+++ b/tiup/customized-montior-in-tiup-environment.md
@@ -7,6 +7,8 @@ summary: Learn how to customize the configurations of monitoring servers managed
 
 When you deploy a TiDB cluster using TiUP, TiUP also deploys monitoring servers, such as Prometheus, Grafana, and Alertmanager. In the meantime, if you scale out this cluster, TiUP also adds the new nodes into monitoring scope.
 
+Noticeably, TiUP overwrites the configurations of the monitoring servers by using its configurations. That means, after you modify the configuration files of the monitoring servers, you might find that your modifications do not take effect because these modifications are overwritten by later TiUP operations such as deployment, scaling out, scaling in, and reloading.
+
 To customize the configurations of the monitoring servers mentioned above, you can follow the instructions below to add related configuration items in the topology.yaml of the TiDB cluster.
 
 > **Note:**

From bb362ec93ea5daa94c88566dff5fb24aca695413 Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Mon, 25 Dec 2023 17:26:40 +0800
Subject: [PATCH 08/14] Update tiup/customized-montior-in-tiup-environment.md

---
 tiup/customized-montior-in-tiup-environment.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tiup/customized-montior-in-tiup-environment.md b/tiup/customized-montior-in-tiup-environment.md
index e99878e34442e..ae09fbda6e8b0 100644
--- a/tiup/customized-montior-in-tiup-environment.md
+++ b/tiup/customized-montior-in-tiup-environment.md
@@ -7,7 +7,6 @@ summary: Learn how to customize the configurations of monitoring servers managed
 
 When you deploy a TiDB cluster using TiUP, TiUP also deploys monitoring servers, such as Prometheus, Grafana, and Alertmanager. In the meantime, if you scale out this cluster, TiUP also adds the new nodes into monitoring scope.
 
-Noticeably, TiUP overwrites the configurations of the monitoring servers by using its configurations. That means, after you modify the configuration files of the monitoring servers, you might find that your modifications do not take effect because these modifications are overwritten by later TiUP operations such as deployment, scaling out, scaling in, and reloading.
 
 To customize the configurations of the monitoring servers mentioned above, you can follow the instructions below to add related configuration items in the topology.yaml of the TiDB cluster.

From f3ddc2b4f752c56853889cb7ea91816ebbb72806 Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Mon, 25 Dec 2023 17:26:53 +0800
Subject: [PATCH 09/14] Update tiup/customized-montior-in-tiup-environment.md

---
 tiup/customized-montior-in-tiup-environment.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tiup/customized-montior-in-tiup-environment.md b/tiup/customized-montior-in-tiup-environment.md
index ae09fbda6e8b0..c2f90850f9412 100644
--- a/tiup/customized-montior-in-tiup-environment.md
+++ b/tiup/customized-montior-in-tiup-environment.md
@@ -7,7 +7,6 @@ summary: Learn how to customize the configurations of monitoring servers managed
 
 When you deploy a TiDB cluster using TiUP, TiUP also deploys monitoring servers, such as Prometheus, Grafana, and Alertmanager. In the meantime, if you scale out this cluster, TiUP also adds the new nodes into monitoring scope.
 
-
 To customize the configurations of the monitoring servers mentioned above, you can follow the instructions below to add related configuration items in the topology.yaml of the TiDB cluster.
 
 > **Note:**

From 6fbe2c3609f0b194347107b075e3fd57ee7c77c6 Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Wed, 3 Jan 2024 15:22:51 +0800
Subject: [PATCH 10/14] Update tiup-cluster-topology-reference.md

---
 tiup/tiup-cluster-topology-reference.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tiup/tiup-cluster-topology-reference.md b/tiup/tiup-cluster-topology-reference.md
index 572d81584252c..b6be6e774e278 100644
--- a/tiup/tiup-cluster-topology-reference.md
+++ b/tiup/tiup-cluster-topology-reference.md
@@ -709,6 +709,8 @@ tispark_workers:
 
 - `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
 
+- `additional_scrape_conf`: Customized Prometheus scrape configuration. For more information, see [Customize Prometheus scrape configuration](/tiup/customized-montior-in-tiup-environment.md#customize-prometheus-scrape-configuration).
+
 For the above fields, you cannot modify these configured fields after the deployment:
 
 - `host`

From 626db3e273fe142cc552d6d7f9b3c7efb17cfe86 Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Mon, 22 Jan 2024 10:23:27 +0800
Subject: [PATCH 11/14] sync changes in cn

---
 tiup/customized-montior-in-tiup-environment.md | 2 +-
 tiup/tiup-cluster-topology-reference.md        | 6 +++++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/tiup/customized-montior-in-tiup-environment.md b/tiup/customized-montior-in-tiup-environment.md
index c2f90850f9412..173e9a92e36e3 100644
--- a/tiup/customized-montior-in-tiup-environment.md
+++ b/tiup/customized-montior-in-tiup-environment.md
@@ -70,7 +70,7 @@ After the preceding configuration is done, when you deploy, scale out, scale in,
         action: drop
 ```
 
-After the preceding configuration is done, when you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the `additional_scrape_conf` field to the corresponding parameters of the Prometheus configuration file.
+After the preceding configuration is done, when you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the `additional_scrape_conf` field to the corresponding parameters of the Prometheus configuration file.
 
 ## Customize Grafana configurations

diff --git a/tiup/tiup-cluster-topology-reference.md b/tiup/tiup-cluster-topology-reference.md
index b7160e5ec7e69..7582176062f7b 100644
--- a/tiup/tiup-cluster-topology-reference.md
+++ b/tiup/tiup-cluster-topology-reference.md
@@ -751,7 +751,7 @@ tispark_workers:
 
 - `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
 
-- `additional_scrape_conf`: Customized Prometheus scrape configuration. For more information, see [Customize Prometheus scrape configuration](/tiup/customized-montior-in-tiup-environment.md#customize-prometheus-scrape-configuration).
+- `additional_scrape_conf`: Customized Prometheus scrape configuration. When you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the `additional_scrape_conf` field to the corresponding parameters of the Prometheus configuration file. For more information, see [Customize Prometheus scrape configuration](/tiup/customized-montior-in-tiup-environment.md#customize-prometheus-scrape-configuration).
 
 For the above fields, you cannot modify these configured fields after the deployment:
 
@@ -810,6 +810,8 @@ monitoring_servers:
 
 - `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
 
+- `config`: This field is used to add custom configurations to Grafana. When you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the `config` field to the Grafana configuration file `grafana.ini`.. For more information, see [Customize other Grafana configurations](/tiup/customized-montior-in-tiup-environment.md#customize-other-grafana-configurations).
+
 > **Note:**
 >
 > If the `dashboard_dir` field of `grafana_servers` is configured, after executing the `tiup cluster rename` command to rename the cluster, you need to perform the following operations:
@@ -861,6 +863,8 @@ grafana_servers:
 
 - `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
 
+- `listen_host`: Specifies the listening address so that Alertmanager can be accessed through a proxy. It is recommended to configure it as `0.0.0.0`. For more information, see [Customize Alertmanager configurations](/tiup/customized-montior-in-tiup-environment.md#customize-alertmanager-configurations).
+
 For the above fields, you cannot modify these configured fields after the deployment:
 
 - `host`

From 6aeb5e62720eef6c83b521c70231d027a6d79982 Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Mon, 22 Jan 2024 10:24:40 +0800
Subject: [PATCH 12/14] Update tiup-cluster-topology-reference.md

---
 tiup/tiup-cluster-topology-reference.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tiup/tiup-cluster-topology-reference.md b/tiup/tiup-cluster-topology-reference.md
index 7582176062f7b..3c92c647f6a53 100644
--- a/tiup/tiup-cluster-topology-reference.md
+++ b/tiup/tiup-cluster-topology-reference.md
@@ -810,7 +810,7 @@ monitoring_servers:
 
 - `resource_control`: Resource control for the service. If this field is configured, the field content is merged with the `resource_control` content in `global` (if the two fields overlap, the content of this field takes effect). Then, a systemd configuration file is generated and sent to the machine specified in `host`. The configuration rules of `resource_control` are the same as the `resource_control` content in `global`.
 
-- `config`: This field is used to add custom configurations to Grafana. When you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the `config` field to the Grafana configuration file `grafana.ini`.. For more information, see [Customize other Grafana configurations](/tiup/customized-montior-in-tiup-environment.md#customize-other-grafana-configurations).
+- `config`: This field is used to add custom configurations to Grafana. When you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the `config` field to the Grafana configuration file `grafana.ini`. For more information, see [Customize other Grafana configurations](/tiup/customized-montior-in-tiup-environment.md#customize-other-grafana-configurations).
 
 > **Note:**
 >

From 35ebedd9a6c7d83ea20fd9260fdb6560756448dd Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Mon, 22 Jan 2024 10:28:04 +0800
Subject: [PATCH 13/14] Update customized-montior-in-tiup-environment.md

---
 tiup/customized-montior-in-tiup-environment.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tiup/customized-montior-in-tiup-environment.md b/tiup/customized-montior-in-tiup-environment.md
index 173e9a92e36e3..57e5573a03f02 100644
--- a/tiup/customized-montior-in-tiup-environment.md
+++ b/tiup/customized-montior-in-tiup-environment.md
@@ -116,7 +116,7 @@ After the preceding configuration is done, when you deploy, scale out, scale in,
       smtp.skip_verify: true
 ```
 
-After the preceding configuration is done, when you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the `config` field to the Grafana configuration file `grafana.ini`.
+After the preceding configuration is done, when you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the `config` field to the Grafana configuration file `grafana.ini`.
 
 ## Customize Alertmanager configurations
 
@@ -135,4 +135,4 @@ alertmanager_servers:
     ssh_port: 22
 ```
 
-After the preceding configuration is done, when you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the `listen_host` field to `--web.listen-address` in Alertmanager startup parameters.
+After the preceding configuration is done, when you deploy, scale out, scale in, or reload a TiDB cluster, TiUP adds the content of the `listen_host` field to `--web.listen-address` in Alertmanager startup parameters.

From 14160fff374390a9752843d229aebe16375be4f7 Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Mon, 22 Jan 2024 17:25:09 +0800
Subject: [PATCH 14/14] Apply suggestions from code review

Co-authored-by: Aolin
---
 tiup/tiup-cluster-topology-reference.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tiup/tiup-cluster-topology-reference.md b/tiup/tiup-cluster-topology-reference.md
index 3c92c647f6a53..2f7e850e0b59b 100644
--- a/tiup/tiup-cluster-topology-reference.md
+++ b/tiup/tiup-cluster-topology-reference.md
@@ -8,7 +8,7 @@ To deploy or scale TiDB using TiUP, you need to provide a topology file ([sample
 
 Similarly, to modify the cluster topology, you need to modify the topology file. The difference is that, after the cluster is deployed, you can only modify a part of the fields in the topology file. This document introduces each section of the topology file and each field in each section.
 
-When you deploy a TiDB cluster using TiUP, TiUP also deploys monitoring servers, such as Prometheus, Grafana, and Alertmanager. In the meantime, if you scale out this cluster, TiUP also adds the new nodes into monitoring scope. To customize the configurations of the monitoring servers mentioned above, you can follow the instructions in [Customize Configurations of Monitoring Servers](/tiup/customized-montior-in-tiup-environment.md).
+When you deploy a TiDB cluster using TiUP, TiUP also deploys monitoring servers, such as Prometheus, Grafana, and Alertmanager. In the meantime, if you scale out this cluster, TiUP also adds the new nodes into monitoring scope. To customize the configurations of the preceding monitoring servers, you can follow the instructions in [Customize Configurations of Monitoring Servers](/tiup/customized-montior-in-tiup-environment.md).
 
 ## File structure