diff --git a/config-templates/simple-tiproxy.yaml b/config-templates/simple-tiproxy.yaml
index 2fd24cd01d3fa..5e5bb5b28f0ea 100644
--- a/config-templates/simple-tiproxy.yaml
+++ b/config-templates/simple-tiproxy.yaml
@@ -5,6 +5,12 @@ global:
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
+component_versions:
+  tiproxy: "v1.2.0"
+server_configs:
+  tiproxy:
+    ha.virtual-ip: "10.0.1.10/24"
+    ha.interface: "eth0"
pd_servers:
  - host: 10.0.1.1
@@ -23,6 +29,7 @@ tikv_servers:
tiproxy_servers:
  - host: 10.0.1.11
+  - host: 10.0.1.12
monitoring_servers:
  - host: 10.0.1.13
diff --git a/tiproxy/tiproxy-configuration.md b/tiproxy/tiproxy-configuration.md
index e1091c377765c..079d1cdffa264 100644
--- a/tiproxy/tiproxy-configuration.md
+++ b/tiproxy/tiproxy-configuration.md
@@ -15,8 +15,9 @@ max-connections = 100
[api]
addr = "0.0.0.0:3080"
-[log]
-level = "info"
+[ha]
+virtual-ip = "10.0.1.10/24"
+interface = "eth0"
[security]
[security.cluster-tls]
@@ -118,6 +119,28 @@ Configurations for the load balancing policy of TiProxy.
+ Possible values: `resource`, `location`, `connection`
+ Specifies the load balancing policy. For the meaning of each possible value, see [TiProxy load balancing policies](/tiproxy/tiproxy-load-balance.md#configure-load-balancing-policies).
+### ha
+
+High availability configurations for TiProxy.
+
+#### `virtual-ip`
+
++ Default value: `""`
++ Support hot-reload: no
++ Specifies the virtual IP address in CIDR format, such as `"10.0.1.10/24"`. In a cluster with multiple TiProxy instances, only one instance binds to the virtual IP at a time. If that instance goes offline, another TiProxy instance automatically binds to the virtual IP, ensuring that clients can always connect to an available TiProxy instance through the virtual IP.
+
+> **Note:**
+>
+> - Virtual IP is only supported on Linux operating systems.
+> - The Linux user running TiProxy must have permission to bind IP addresses.
+> - The virtual IP and the IPs of all TiProxy instances must be within the same CIDR range.
+
+#### `interface`
+
++ Default value: `""`
++ Support hot-reload: no
++ Specifies the network interface to bind the virtual IP to, such as `"eth0"`. The virtual IP will be bound to a TiProxy instance only when both [`ha.virtual-ip`](#virtual-ip) and `ha.interface` are set.
+
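+If the cluster is deployed with TiUP, these two items can also be set through `server_configs` in the topology file instead of editing the TiProxy configuration file directly. The following is a minimal sketch; the IP range and interface name are placeholders for your environment:
+
+```yaml
+server_configs:
+  tiproxy:
+    ha.virtual-ip: "10.0.1.10/24"
+    ha.interface: "eth0"
+```
+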
### `labels`
+ Default value: `{}`
diff --git a/tiproxy/tiproxy-deployment-topology.md b/tiproxy/tiproxy-deployment-topology.md
index 22178eb203480..ccee9f4f5d7f6 100644
--- a/tiproxy/tiproxy-deployment-topology.md
+++ b/tiproxy/tiproxy-deployment-topology.md
@@ -16,7 +16,7 @@ TiProxy is a L7 proxy server for TiDB, which can balance connections and migrate
| TiDB | 3 | 16 VCore 32GB * 3 | 10.0.1.4 <br/> 10.0.1.5 <br/> 10.0.1.6 | Default port <br/> Global directory configuration |
| PD | 3 | 4 VCore 8GB * 3 | 10.0.1.1 <br/> 10.0.1.2 <br/> 10.0.1.3 | Default port <br/> Global directory configuration |
| TiKV | 3 | 16 VCore 32GB 2TB (nvme ssd) * 3 | 10.0.1.7 <br/> 10.0.1.8 <br/> 10.0.1.9 | Default port <br/> Global directory configuration |
-| TiProxy | 1 | 4 VCore 8 GB * 1 | 10.0.1.11 | Default port <br/> Global directory configuration |
+| TiProxy | 2 | 4 VCore 8 GB * 2 | 10.0.1.11 <br/> 10.0.1.12 | Default port <br/> Global directory configuration |
| Monitoring & Grafana | 1 | 4 VCore 8GB * 1 500GB (ssd) | 10.0.1.13 | Default port <br/> Global directory configuration |
### Topology templates
diff --git a/tiproxy/tiproxy-grafana.md b/tiproxy/tiproxy-grafana.md
index bc41b1e8265a2..b592b098910ce 100644
--- a/tiproxy/tiproxy-grafana.md
+++ b/tiproxy/tiproxy-grafana.md
@@ -38,6 +38,9 @@ TiProxy has four panel groups. The metrics on these panels indicate the current
- backend network break: fails to read from or write to the TiDB. This may be caused by a network problem or the TiDB server shutting down
- backend handshake fail: TiProxy fails to handshake with the TiDB server
- Goroutine Count: the number of Goroutines on each TiProxy instance
+- Owner: the TiProxy instance that currently owns each task. For example, `10.24.31.1:3080 - vip` indicates that the TiProxy instance at `10.24.31.1:3080` is bound to the virtual IP. The tasks include the following:
+ - vip: binds a virtual IP
+ - metric_reader: reads monitoring data from TiDB servers
## Query-Summary
diff --git a/tiproxy/tiproxy-load-balance.md b/tiproxy/tiproxy-load-balance.md
index aa9cf327a30ee..5e8b6395be193 100644
--- a/tiproxy/tiproxy-load-balance.md
+++ b/tiproxy/tiproxy-load-balance.md
@@ -16,10 +16,7 @@ By default, TiProxy enables all policies with the following priorities:
5. Location-based load balancing: TiProxy prioritizes routing requests to the TiDB server geographically closest to TiProxy.
6. Connection count-based load balancing: when the connection count of a TiDB server is much higher than that of other TiDB servers, TiProxy migrates connections from that TiDB server to a TiDB server with fewer connections.
-> **Note:**
->
-> - Health-based, memory-based, and CPU-based load balancing policies depend on [Prometheus](https://prometheus.io). Ensure that Prometheus is available. Otherwise, these policies do not take effect.
-> - To adjust the priorities of load balancing policies, see [Configure load balancing policies](#configure-load-balancing-policies).
+To adjust the priorities of load balancing policies, see [Configure load balancing policies](#configure-load-balancing-policies).
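+
+For example, assuming a TiUP-managed cluster, the policy can be set through `server_configs` in the topology file; `balance.policy` below refers to the `policy` item documented in the TiProxy configuration file, and the value is only a placeholder (a minimal sketch, not a recommendation):
+
+```yaml
+server_configs:
+  tiproxy:
+    balance.policy: "resource"
+```
+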
## Status-based load balancing
@@ -27,7 +24,7 @@ TiProxy periodically checks whether a TiDB server is offline or shutting down us
## Health-based load balancing
-TiProxy determines the health of a TiDB server by querying its error count from Prometheus. When the health of a TiDB server is abnormal while others are normal, TiProxy migrates connections from that server to a healthy TiDB server, achieving automatic failover.
+TiProxy determines the health of a TiDB server by querying its error count. When the health of a TiDB server is abnormal while others are normal, TiProxy migrates connections from that server to a healthy TiDB server, achieving automatic failover.
This policy is suitable for the following scenarios:
@@ -36,7 +33,7 @@ This policy is suitable for the following scenarios:
## Memory-based load balancing
-TiProxy queries the memory usage of TiDB servers from Prometheus. When the memory usage of a TiDB server is rapidly increasing or reaching a high level, TiProxy migrates connections from that server to a TiDB server with lower memory usage, preventing unnecessary connection termination due to OOM. TiProxy does not guarantee identical memory usage across TiDB servers. This policy only takes effect when a TiDB server is at risk of OOM.
+TiProxy queries the memory usage of TiDB servers. When the memory usage of a TiDB server is rapidly increasing or reaching a high level, TiProxy migrates connections from that server to a TiDB server with lower memory usage, preventing unnecessary connection termination due to OOM. TiProxy does not guarantee identical memory usage across TiDB servers. This policy only takes effect when a TiDB server is at risk of OOM.
When a TiDB server is at risk of OOM, TiProxy attempts to migrate all connections from it. Usually, if OOM is caused by runaway queries, ongoing runaway queries will not be migrated to another TiDB server for re-execution, because these connections can only be migrated after the transaction is complete.
@@ -48,7 +45,7 @@ This policy has the following limitations:
## CPU-based load balancing
-TiProxy queries the CPU usage of TiDB servers from Prometheus and migrates connections from a TiDB server with high CPU usage to a server with lower usage, reducing overall query latency. TiProxy does not guarantee identical CPU usage across TiDB servers but ensures that the CPU usage differences are minimized.
+TiProxy queries the CPU usage of TiDB servers and migrates connections from a TiDB server with high CPU usage to a server with lower usage, reducing overall query latency. TiProxy does not guarantee identical CPU usage across TiDB servers but ensures that the CPU usage differences are minimized.
This policy is suitable for the following scenarios:
diff --git a/tiproxy/tiproxy-overview.md b/tiproxy/tiproxy-overview.md
index 1ca43e8ea514f..5ae207e7b77d4 100644
--- a/tiproxy/tiproxy-overview.md
+++ b/tiproxy/tiproxy-overview.md
@@ -40,7 +40,7 @@ When a TiDB server performs scaling in or scaling out, if you use a common load
### Quick deployment
-TiProxy is integrated into [TiUP](https://github.com/pingcap/tiup), [TiDB Operator](https://github.com/pingcap/tidb-operator), [TiDB Dashboard](/dashboard/dashboard-intro.md), and [Grafana](/tiproxy/tiproxy-grafana.md), which reduces the deployment, operation, and management costs.
+TiProxy is integrated into [TiUP](https://github.com/pingcap/tiup), [TiDB Operator](https://github.com/pingcap/tidb-operator), [TiDB Dashboard](/dashboard/dashboard-intro.md), and [Grafana](/tiproxy/tiproxy-grafana.md), and supports built-in virtual IP management, reducing the deployment, operation, and management costs.
## User scenarios
@@ -91,7 +91,7 @@ This section describes how to deploy and change TiProxy using TiUP. For how to d
3. Configure the TiProxy instances.
- To ensure the high availability of TiProxy, it is recommended to deploy at least two TiProxy instances. You can use hardware load balancers to distribute traffic to each TiProxy instance, or configure virtual IP to route the traffic to the available TiProxy instance.
+ To ensure the high availability of TiProxy, it is recommended to deploy at least two TiProxy instances and configure a virtual IP by setting [`ha.virtual-ip`](/tiproxy/tiproxy-configuration.md#virtual-ip) and [`ha.interface`](/tiproxy/tiproxy-configuration.md#interface) to route the traffic to the available TiProxy instance.
When selecting the model and number of TiProxy instances, consider the following factors:
@@ -106,12 +106,14 @@ This section describes how to deploy and change TiProxy using TiUP. For how to d
```yaml
component_versions:
-  tiproxy: "v1.0.0"
+  tiproxy: "v1.2.0"
server_configs:
  tiproxy:
    security.server-tls.ca: "/var/ssl/ca.pem"
    security.server-tls.cert: "/var/ssl/cert.pem"
    security.server-tls.key: "/var/ssl/key.pem"
+    ha.virtual-ip: "10.0.1.10/24"
+    ha.interface: "eth0"
```
4. Start the cluster.