diff --git a/media/tiproxy/tiproxy-balance-label.png b/media/tiproxy/tiproxy-balance-label.png
new file mode 100644
index 0000000000000..c291c40ba05ce
Binary files /dev/null and b/media/tiproxy/tiproxy-balance-label.png differ
diff --git a/tiproxy/tiproxy-configuration.md b/tiproxy/tiproxy-configuration.md
index 7a4b1ce2343a5..87242e84a17f5 100644
--- a/tiproxy/tiproxy-configuration.md
+++ b/tiproxy/tiproxy-configuration.md
@@ -112,6 +112,13 @@ Configurations for HTTP gateway.
 
 Configurations for the load balancing policy of TiProxy.
 
+#### `label-name`
+
++ Default value: `""`
++ Support hot-reload: yes
++ Specifies the label name used for [label-based load balancing](/tiproxy/tiproxy-load-balance.md#label-based-load-balancing). TiProxy matches the label values of TiDB servers based on this label name and prioritizes routing requests to TiDB servers with the same label value as itself.
++ The default value of `label-name` is an empty string, indicating that label-based load balancing is not used. To enable this load balancing policy, you need to set this configuration item to a non-empty string and configure both [`labels`](#labels) in TiProxy and [`labels`](/tidb-configuration-file.md#labels) in TiDB. For more information, see [Label-based load balancing](/tiproxy/tiproxy-load-balance.md#label-based-load-balancing).
+
 #### `policy`
 
 + Default value: `resource`
diff --git a/tiproxy/tiproxy-grafana.md b/tiproxy/tiproxy-grafana.md
index b592b098910ce..2a8cf4c4a83db 100644
--- a/tiproxy/tiproxy-grafana.md
+++ b/tiproxy/tiproxy-grafana.md
@@ -16,7 +16,7 @@ TiProxy has four panel groups. The metrics on these panels indicate the current
 - **TiProxy-Server**: instance information.
 - **TiProxy-Query-Summary**: SQL query metrics like CPS.
 - **TiProxy-Backend**: information on TiDB nodes that TiProxy might connect to.
-- **TiProxy-Balance**: loadbalance mertrics.
+- **TiProxy-Balance**: load balancing metrics.
 
 ## Server
 
@@ -59,6 +59,7 @@ TiProxy has four panel groups. The metrics on these panels indicate the current
 - Session Migration Duration: average, P95, P99 session migration duration.
 - Session Migration Reasons: the number of session migrations that happened every minute and the reason for them. The reasons include the following:
     - `status`: TiProxy performed [status-based load balancing](/tiproxy/tiproxy-load-balance.md#status-based-load-balancing).
+    - `label`: TiProxy performed [label-based load balancing](/tiproxy/tiproxy-load-balance.md#label-based-load-balancing).
     - `health`: TiProxy performed [health-based load balancing](/tiproxy/tiproxy-load-balance.md#health-based-load-balancing).
     - `memory`: TiProxy performed [memory-based load balancing](/tiproxy/tiproxy-load-balance.md#memory-based-load-balancing).
     - `cpu`: TiProxy performed [CPU-based load balancing](/tiproxy/tiproxy-load-balance.md#cpu-based-load-balancing).
diff --git a/tiproxy/tiproxy-load-balance.md b/tiproxy/tiproxy-load-balance.md
index 5e8b6395be193..d4ac60d04b6e5 100644
--- a/tiproxy/tiproxy-load-balance.md
+++ b/tiproxy/tiproxy-load-balance.md
@@ -5,16 +5,17 @@ summary: Introduce the load balancing policies in TiProxy and their applicable s
 
 # TiProxy Load Balancing Policies
 
-TiProxy v1.0.0 only supports status-based and connection count-based load balancing policies for TiDB servers. Starting from v1.1.0, TiProxy introduces four additional load balancing policies that can be configured independently: health-based, memory-based, CPU-based, and location-based.
+TiProxy v1.0.0 supports only status-based and connection count-based load balancing policies for TiDB servers. Starting from v1.1.0, TiProxy introduces five additional load balancing policies: label-based, health-based, memory-based, CPU-based, and location-based.
 
-By default, TiProxy enables all policies with the following priorities:
+By default, TiProxy applies these policies in the following order of priority:
 
 1. Status-based load balancing: when a TiDB server is shutting down, TiProxy migrates connections from that TiDB server to an online TiDB server.
-2. Health-based load balancing: when the health of a TiDB server is abnormal, TiProxy migrates connections from that TiDB server to a healthy TiDB server.
-3. Memory-based load balancing: when a TiDB server is at risk of running out of memory (OOM), TiProxy migrates connections from that TiDB server to a TiDB server with lower memory usage.
-4. CPU-based load balancing: when the CPU usage of a TiDB server is much higher than that of other TiDB servers, TiProxy migrates connections from that TiDB server to a TiDB server with lower CPU usage.
-5. Location-based load balancing: TiProxy prioritizes routing requests to the TiDB server geographically closest to TiProxy.
-6. Connection count-based load balancing: when the connection count of a TiDB server is much higher than that of other TiDB servers, TiProxy migrates connections from that TiDB server to a TiDB server with fewer connections.
+2. Label-based load balancing: when enabled, TiProxy prioritizes routing requests to TiDB servers that share the same label as the TiProxy instance, enabling resource isolation at the computing layer.
+3. Health-based load balancing: when the health of a TiDB server is abnormal, TiProxy migrates connections from that TiDB server to a healthy TiDB server.
+4. Memory-based load balancing: when a TiDB server is at risk of running out of memory (OOM), TiProxy migrates connections from that TiDB server to a TiDB server with lower memory usage.
+5. CPU-based load balancing: when the CPU usage of a TiDB server is much higher than that of other TiDB servers, TiProxy migrates connections from that TiDB server to a TiDB server with lower CPU usage.
+6. Location-based load balancing: TiProxy prioritizes routing requests to the TiDB server geographically closest to TiProxy.
+7. Connection count-based load balancing: when the connection count of a TiDB server is much higher than that of other TiDB servers, TiProxy migrates connections from that TiDB server to a TiDB server with fewer connections.
 
 To adjust the priorities of load balancing policies, see [Configure load balancing policies](#configure-load-balancing-policies).
 
@@ -22,6 +23,68 @@ To adjust the priorities of load balancing policies, see [Configure load balanci
 
 ## Status-based load balancing
 
 TiProxy periodically checks whether a TiDB server is offline or shutting down using the SQL port and status port.
 
+## Label-based load balancing
+
+Label-based load balancing prioritizes routing connections to TiDB servers that share the same label as TiProxy. This policy is disabled by default. Enable it only when your workload requires resource isolation at the computing layer.
+
+To enable label-based load balancing, you need to:
+
+- Specify the matching label name using [`balance.label-name`](/tiproxy/tiproxy-configuration.md#label-name).
+- Configure the [`labels`](/tiproxy/tiproxy-configuration.md#labels) configuration item in TiProxy.
+- Configure the [`labels`](/tidb-configuration-file.md#labels) configuration item in TiDB servers.
+
+After configuration, TiProxy uses the label name specified in `balance.label-name` to route connections to TiDB servers with matching label values, as the following minimal sketch shows.
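+
+The following minimal sketch shows these three settings together in a TiUP topology file. The host names are placeholders, and the complete topology for the example scenario appears later in this section.
+
+```yaml
+server_configs:
+  tiproxy:
+    # The label name that TiProxy uses to match TiDB servers.
+    balance.label-name: "app"
+tiproxy_servers:
+  - host: tiproxy-host-1
+    config:
+      # This TiProxy instance prefers TiDB servers whose "app" label is also "Order".
+      labels: {app: "Order"}
+tidb_servers:
+  - host: tidb-host-1
+    config:
+      # The label value that TiProxy compares with its own "app" label.
+      labels: {app: "Order"}
+```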
+
+Consider an application that handles both transaction and BI workloads. To prevent these workloads from interfering with each other, configure your cluster as follows:
+
+1. Set [`balance.label-name`](/tiproxy/tiproxy-configuration.md#label-name) to `"app"` in TiProxy, so that TiProxy matches TiDB servers by the label name `"app"` and routes connections to TiDB servers whose `"app"` label value matches its own.
+2. Configure two TiProxy instances, adding `"app"="Order"` and `"app"="BI"` to their respective [`labels`](/tiproxy/tiproxy-configuration.md#labels) configuration items.
+3. Divide TiDB instances into two groups, adding `"app"="Order"` and `"app"="BI"` to their respective [`labels`](/tidb-configuration-file.md#labels) configuration items.
+4. Optional: For resource isolation at the storage layer, configure [Placement Rules](/configure-placement-rules.md) or [Resource Control](/tidb-resource-control.md).
+5. Direct transaction and BI clients to connect to their respective TiProxy instance addresses.
+
+![Label-based Load Balancing](/media/tiproxy/tiproxy-balance-label.png)
+
+Example configuration for this topology:
+
+```yaml
+component_versions:
+  tiproxy: "v1.1.0"
+server_configs:
+  tiproxy:
+    balance.label-name: "app"
+  tidb:
+    graceful-wait-before-shutdown: 15
+tiproxy_servers:
+  - host: tiproxy-host-1
+    config:
+      labels: {app: "Order"}
+  - host: tiproxy-host-2
+    config:
+      labels: {app: "BI"}
+tidb_servers:
+  - host: tidb-host-1
+    config:
+      labels: {app: "Order"}
+  - host: tidb-host-2
+    config:
+      labels: {app: "Order"}
+  - host: tidb-host-3
+    config:
+      labels: {app: "BI"}
+  - host: tidb-host-4
+    config:
+      labels: {app: "BI"}
+tikv_servers:
+  - host: tikv-host-1
+  - host: tikv-host-2
+  - host: tikv-host-3
+pd_servers:
+  - host: pd-host-1
+  - host: pd-host-2
+  - host: pd-host-3
+```
+
 ## Health-based load balancing
 
 TiProxy determines the health of a TiDB server by querying its error count. When the health of a TiDB server is abnormal while others are normal, TiProxy migrates connections from that server to a healthy TiDB server, achieving automatic failover.
@@ -99,11 +162,12 @@ tidb_servers:
         zone: west
 tikv_servers:
   - host: tikv-host-1
-    port: 20160
   - host: tikv-host-2
-    port: 20160
   - host: tikv-host-3
-    port: 20160
+pd_servers:
+  - host: pd-host-1
+  - host: pd-host-2
+  - host: pd-host-3
 ```
 
 In the preceding configuration, the TiProxy instance on `tiproxy-host-1` prioritizes routing requests to the TiDB server on `tidb-host-1` because `tiproxy-host-1` has the same `zone` configuration as `tidb-host-1`. Similarly, the TiProxy instance on `tiproxy-host-2` prioritizes routing requests to the TiDB server on `tidb-host-2`.
@@ -121,9 +185,9 @@ Typically, TiProxy identifies the load on TiDB servers based on CPU usage. This
 
 TiProxy lets you configure the combination and priority of load balancing policies through the [`policy`](/tiproxy/tiproxy-configuration.md#policy) configuration item.
 
-- `resource`: the resource priority policy performs load balancing based on the following priority order: status, health, memory, CPU, location, and connection count.
-- `location`: the location priority policy performs load balancing based on the following priority order: status, location, health, memory, CPU, and connection count.
-- `connection`: the minimum connection count policy performs load balancing based on the following priority order: status and connection count.
+- `resource`: the resource priority policy performs load balancing based on the following priority order: status, label, health, memory, CPU, location, and connection count.
+- `location`: the location priority policy performs load balancing based on the following priority order: status, label, location, health, memory, CPU, and connection count.
+- `connection`: the minimum connection count policy performs load balancing based on the following priority order: status, label, and connection count.
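+
+The following minimal sketch switches the policy from the default `resource` to `connection` in a TiUP topology file, assuming the same `balance.`-prefixed key path used for `label-name`. The host name is a placeholder.
+
+```yaml
+server_configs:
+  tiproxy:
+    # Use the minimum connection count policy: status, label, and connection count.
+    balance.policy: "connection"
+tiproxy_servers:
+  - host: tiproxy-host-1
+```
 
 ## More resources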