diff --git a/RELEASE-NOTES.md b/RELEASE-NOTES.md
index 8620c4dccfbabf..dd1830eaa87c41 100644
--- a/RELEASE-NOTES.md
+++ b/RELEASE-NOTES.md
@@ -19,6 +19,7 @@
1. Proxy: Add query parameters and check for mysql kill processId - [#33274](https://github.com/apache/shardingsphere/pull/33274)
1. Agent: Simplify the use of Agent's Docker Image - [#33356](https://github.com/apache/shardingsphere/pull/33356)
1. Build: Avoid using `-proc:full` when compiling ShardingSphere with OpenJDK23 - [#33681](https://github.com/apache/shardingsphere/pull/33681)
+1. Doc: Add documentation for HiveServer2 support - [#33717](https://github.com/apache/shardingsphere/pull/33717)
### Bug Fixes
diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.cn.md b/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.cn.md
index 661c116ab3fdb3..d4f2a24d928af4 100644
--- a/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.cn.md
+++ b/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.cn.md
@@ -289,86 +289,9 @@ Caused by: java.io.UnsupportedEncodingException: Codepage Cp1252 is not supporte
ClickHouse does not support local transactions, XA transactions, or Seata AT mode transactions at the ShardingSphere integration level. More discussion is at https://github.com/ClickHouse/clickhouse-docs/issues/2300 .
-7. When using the Hive dialect through ShardingSphere JDBC, affected by https://issues.apache.org/jira/browse/HIVE-28445 ,
-users should not use `org.apache.hive:hive-jdbc:4.0.1` with the `classifier` of `standalone`, to avoid dependency conflicts.
-A possible configuration example is as follows,
-
-```xml
-<project>
-    <dependencies>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-jdbc</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-infra-database-hive</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-parser-sql-hive</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.hive</groupId>
-            <artifactId>hive-jdbc</artifactId>
-            <version>4.0.1</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.hive</groupId>
-            <artifactId>hive-service</artifactId>
-            <version>4.0.1</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.hadoop</groupId>
-            <artifactId>hadoop-client-api</artifactId>
-            <version>3.3.6</version>
-        </dependency>
-    </dependencies>
-</project>
-```
-
-This leads to a large number of dependency conflicts.
-If users do not want to manually resolve potentially thousands of lines of dependency conflicts, they can use a third-party build of the HiveServer2 JDBC Driver `Thin JAR`.
-A possible configuration example is as follows,
-
-```xml
-<project>
-    <dependencies>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-jdbc</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-infra-database-hive</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-parser-sql-hive</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>io.github.linghengqian</groupId>
-            <artifactId>hive-server2-jdbc-driver-thin</artifactId>
-            <version>1.5.0</version>
-            <exclusions>
-                <exclusion>
-                    <groupId>com.fasterxml.woodstox</groupId>
-                    <artifactId>woodstox-core</artifactId>
-                </exclusion>
-            </exclusions>
-        </dependency>
-    </dependencies>
-</project>
-```
-
-Affected by https://github.com/grpc/grpc-java/issues/10601 , if users have introduced `org.apache.hive:hive-jdbc` into their project,
+7. Affected by https://github.com/grpc/grpc-java/issues/10601 , if users have introduced `org.apache.hive:hive-jdbc` into their project,
they need to create a file named `native-image.properties` containing the following content under the `META-INF/native-image/io.grpc/grpc-netty-shaded` folder of the project's classpath,
+
```properties
Args=--initialize-at-run-time=\
io.grpc.netty.shaded.io.netty.channel.ChannelHandlerMask,\
@@ -400,55 +323,6 @@ Args=--initialize-at-run-time=\
io.grpc.netty.shaded.io.netty.util.AttributeKey
```
-In order to use DML SQL statements such as `delete` when connecting to HiveServer2,
-users should consider using only ACID-supported tables in ShardingSphere JDBC. `apache/hive` provides a variety of transaction solutions.
-
-The first option is to use ACID tables; a possible table creation process is as follows.
-Due to their outdated catalog-based table format, users may have to wait before and after DML statement execution for HiveServer2 to complete the inefficient DML operations.
-
-```sql
-set metastore.compactor.initiator.on=true;
-set metastore.compactor.cleaner.on=true;
-set metastore.compactor.worker.threads=5;
-
-set hive.support.concurrency=true;
-set hive.exec.dynamic.partition.mode=nonstrict;
-set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
-
-CREATE TABLE IF NOT EXISTS t_order
-(
- order_id BIGINT,
- order_type INT,
- user_id INT NOT NULL,
- address_id BIGINT NOT NULL,
- status VARCHAR(50),
- PRIMARY KEY (order_id) disable novalidate
-) CLUSTERED BY (order_id) INTO 2 BUCKETS STORED AS ORC TBLPROPERTIES ('transactional' = 'true');
-```
-
-The second option is to use Iceberg tables; a possible table creation process is as follows.
-The Apache Iceberg table format is poised to replace the traditional Hive table format in the coming years,
-see https://blog.cloudera.com/from-hive-tables-to-iceberg-tables-hassle-free/ .
-
-```sql
-set iceberg.mr.schema.auto.conversion=true;
-
-CREATE TABLE IF NOT EXISTS t_order
-(
- order_id BIGINT,
- order_type INT,
- user_id INT NOT NULL,
- address_id BIGINT NOT NULL,
- status VARCHAR(50),
- PRIMARY KEY (order_id) disable novalidate
-) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2');
-```
-
-Since the HiveServer2 JDBC Driver does not implement `java.sql.DatabaseMetaData#getURL()`,
-ShardingSphere performs fuzzy matching, so for now users can only connect to HiveServer2 through HikariCP.
-
-HiveServer2 does not support local transactions, XA transactions, or Seata AT mode transactions at the ShardingSphere integration level; more discussion is at https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions .
-
8. Due to https://github.com/oracle/graal/issues/7979 ,
the Oracle JDBC Driver corresponding to the `com.oracle.database.jdbc:ojdbc8` Maven module cannot be used under GraalVM Native Image.
diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.en.md b/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.en.md
index 4783a5a43564f8..38bfea154192dc 100644
--- a/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.en.md
+++ b/docs/document/content/user-manual/shardingsphere-jdbc/graalvm-native-image/_index.en.md
@@ -302,88 +302,10 @@ Possible configuration examples are as follows,
ClickHouse does not support local transactions, XA transactions, and Seata AT mode transactions at the ShardingSphere integration level.
More discussion is at https://github.com/ClickHouse/clickhouse-docs/issues/2300 .
-7. When using the Hive dialect through ShardingSphere JDBC, affected by https://issues.apache.org/jira/browse/HIVE-28445 ,
- users should not use `org.apache.hive:hive-jdbc:4.0.1` with `classifier` as `standalone` to avoid dependency conflicts.
- Possible configuration examples are as follows,
-
-```xml
-<project>
-    <dependencies>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-jdbc</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-infra-database-hive</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-parser-sql-hive</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.hive</groupId>
-            <artifactId>hive-jdbc</artifactId>
-            <version>4.0.1</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.hive</groupId>
-            <artifactId>hive-service</artifactId>
-            <version>4.0.1</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.hadoop</groupId>
-            <artifactId>hadoop-client-api</artifactId>
-            <version>3.3.6</version>
-        </dependency>
-    </dependencies>
-</project>
-```
-
-This can lead to a large number of dependency conflicts.
-If the user does not want to manually resolve potentially thousands of lines of dependency conflicts,
-a third-party build of the HiveServer2 JDBC Driver `Thin JAR` can be used.
-An example of a possible configuration is as follows,
-
-```xml
-<project>
-    <dependencies>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-jdbc</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-infra-database-hive</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>shardingsphere-parser-sql-hive</artifactId>
-            <version>${shardingsphere.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>io.github.linghengqian</groupId>
-            <artifactId>hive-server2-jdbc-driver-thin</artifactId>
-            <version>1.5.0</version>
-            <exclusions>
-                <exclusion>
-                    <groupId>com.fasterxml.woodstox</groupId>
-                    <artifactId>woodstox-core</artifactId>
-                </exclusion>
-            </exclusions>
-        </dependency>
-    </dependencies>
-</project>
-```
-
-Affected by https://github.com/grpc/grpc-java/issues/10601 , should users incorporate `org.apache.hive:hive-service` into their project,
+7. Affected by https://github.com/grpc/grpc-java/issues/10601 , should users incorporate `org.apache.hive:hive-jdbc` into their project,
it is imperative to create a file named `native-image.properties` within the directory `META-INF/native-image/io.grpc/grpc-netty-shaded` of the classpath,
containing the following content,
+
```properties
Args=--initialize-at-run-time=\
io.grpc.netty.shaded.io.netty.channel.ChannelHandlerMask,\
@@ -415,57 +337,6 @@ Args=--initialize-at-run-time=\
io.grpc.netty.shaded.io.netty.util.AttributeKey
```
-In order to be able to use DML SQL statements such as `delete`, when connecting to HiveServer2,
-users should consider using only ACID-supported tables in ShardingSphere JDBC. `apache/hive` provides a variety of transaction solutions.
-
-The first option is to use ACID tables, and the possible table creation process is as follows.
-Due to its outdated catalog-based table format,
-users may have to wait before and after DML statement execution to let HiveServer2 complete the inefficient DML operations.
-
-```sql
-set metastore.compactor.initiator.on=true;
-set metastore.compactor.cleaner.on=true;
-set metastore.compactor.worker.threads=5;
-
-set hive.support.concurrency=true;
-set hive.exec.dynamic.partition.mode=nonstrict;
-set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
-
-CREATE TABLE IF NOT EXISTS t_order
-(
- order_id BIGINT,
- order_type INT,
- user_id INT NOT NULL,
- address_id BIGINT NOT NULL,
- status VARCHAR(50),
- PRIMARY KEY (order_id) disable novalidate
-) CLUSTERED BY (order_id) INTO 2 BUCKETS STORED AS ORC TBLPROPERTIES ('transactional' = 'true');
-```
-
-The second option is to use Iceberg table. The possible table creation process is as follows.
-Apache Iceberg table format is poised to replace the traditional Hive table format in the coming years,
-see https://blog.cloudera.com/from-hive-tables-to-iceberg-tables-hassle-free/ .
-
-```sql
-set iceberg.mr.schema.auto.conversion=true;
-
-CREATE TABLE IF NOT EXISTS t_order
-(
- order_id BIGINT,
- order_type INT,
- user_id INT NOT NULL,
- address_id BIGINT NOT NULL,
- status VARCHAR(50),
- PRIMARY KEY (order_id) disable novalidate
-) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2');
-```
-
-Since HiveServer2 JDBC Driver does not implement `java.sql.DatabaseMetaData#getURL()`,
-ShardingSphere has done some obfuscation, so users can only connect to HiveServer2 through HikariCP for now.
-
-HiveServer2 does not support local transactions, XA transactions, and Seata AT mode transactions at the ShardingSphere integration level.
-More discussion is available at https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions .
-
8. Due to https://github.com/oracle/graal/issues/7979 ,
the Oracle JDBC Driver corresponding to the `com.oracle.database.jdbc:ojdbc8` Maven module cannot be used under GraalVM Native Image.
diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.cn.md b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.cn.md
new file mode 100644
index 00000000000000..9b2db9e5f09a06
--- /dev/null
+++ b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.cn.md
@@ -0,0 +1,284 @@
++++
+title = "HiveServer2"
+weight = 6
++++
+
+## Background
+
+ShardingSphere does not provide support for the `driverClassName` of `org.apache.hive.jdbc.HiveDriver` by default.
+ShardingSphere's support for the HiveServer2 JDBC Driver is located in an optional module.
+
+## Prerequisites
+
+To use a `jdbcUrl` like `jdbc:hive2://localhost:10000/` for data nodes in ShardingSphere's configuration file,
+the possible Maven dependencies are as follows,
+
+```xml
+<dependencies>
+    <dependency>
+        <groupId>org.apache.shardingsphere</groupId>
+        <artifactId>shardingsphere-jdbc</artifactId>
+        <version>${shardingsphere.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>org.apache.shardingsphere</groupId>
+        <artifactId>shardingsphere-infra-database-hive</artifactId>
+        <version>${shardingsphere.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>org.apache.shardingsphere</groupId>
+        <artifactId>shardingsphere-parser-sql-hive</artifactId>
+        <version>${shardingsphere.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>org.apache.hive</groupId>
+        <artifactId>hive-jdbc</artifactId>
+        <version>4.0.1</version>
+    </dependency>
+    <dependency>
+        <groupId>org.apache.hive</groupId>
+        <artifactId>hive-service</artifactId>
+        <version>4.0.1</version>
+    </dependency>
+    <dependency>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-client-api</artifactId>
+        <version>3.3.6</version>
+    </dependency>
+</dependencies>
+```
+
+### Optional shortcut for resolving dependency conflicts
+
+Using `org.apache.hive:hive-jdbc:4.0.1` directly causes a large number of dependency conflicts.
+If users do not want to manually resolve potentially thousands of lines of dependency conflicts, they can use a third-party build of the HiveServer2 JDBC Driver Thin JAR.
+A possible configuration example is as follows,
+
+```xml
+<dependencies>
+    <dependency>
+        <groupId>org.apache.shardingsphere</groupId>
+        <artifactId>shardingsphere-jdbc</artifactId>
+        <version>${shardingsphere.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>org.apache.shardingsphere</groupId>
+        <artifactId>shardingsphere-infra-database-hive</artifactId>
+        <version>${shardingsphere.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>org.apache.shardingsphere</groupId>
+        <artifactId>shardingsphere-parser-sql-hive</artifactId>
+        <version>${shardingsphere.version}</version>
+    </dependency>
+    <dependency>
+        <groupId>io.github.linghengqian</groupId>
+        <artifactId>hive-server2-jdbc-driver-thin</artifactId>
+        <version>1.5.0</version>
+        <exclusions>
+            <exclusion>
+                <groupId>com.fasterxml.woodstox</groupId>
+                <artifactId>woodstox-core</artifactId>
+            </exclusion>
+        </exclusions>
+    </dependency>
+</dependencies>
+```
+
+## Configuration Example
+
+### Start HiveServer2
+
+Write a Docker Compose file to start HiveServer2.
+
+```yaml
+services:
+ hive-server2:
+ image: apache/hive:4.0.1
+ environment:
+ SERVICE_NAME: hiveserver2
+ ports:
+ - "10000:10000"
+ expose:
+ - 10002
+```
+
+### Create business tables
+
+Create the business databases and tables inside HiveServer2 through a third-party tool.
+Taking DBeaver CE as an example, connect to HiveServer2 with the `jdbcUrl` of `jdbc:hive2://localhost:10000/`, leaving `username` and `password` empty.
+
+```sql
+CREATE DATABASE demo_ds_0;
+CREATE DATABASE demo_ds_1;
+CREATE DATABASE demo_ds_2;
+```
+
+Connect to HiveServer2 with the `jdbcUrl` of `jdbc:hive2://localhost:10000/demo_ds_0`,
+`jdbc:hive2://localhost:10000/demo_ds_1` and `jdbc:hive2://localhost:10000/demo_ds_2` respectively, and execute the following SQL,
+
+```sql
+set iceberg.mr.schema.auto.conversion=true;
+
+CREATE TABLE IF NOT EXISTS t_address
+(
+ address_id BIGINT NOT NULL,
+ address_name VARCHAR(100) NOT NULL,
+ PRIMARY KEY (address_id) disable novalidate
+) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2');
+
+TRUNCATE TABLE t_address;
+```
+
+### Create a ShardingSphere data source in the business project
+
+After adding the dependencies listed in `Prerequisites` to the business project, write the ShardingSphere data source configuration file `demo.yaml` on the classpath of the business project,
+
+```yaml
+dataSources:
+ ds_0:
+ dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+ driverClassName: org.apache.hive.jdbc.HiveDriver
+ jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_0
+ ds_1:
+ dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+ driverClassName: org.apache.hive.jdbc.HiveDriver
+ jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_1
+ ds_2:
+ dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+ driverClassName: org.apache.hive.jdbc.HiveDriver
+ jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_2
+rules:
+- !BROADCAST
+ tables:
+ - t_address
+```
+
+### Enjoy the integration
+
+Create the ShardingSphere data source,
+
+```java
+import com.zaxxer.hikari.HikariConfig;
+import com.zaxxer.hikari.HikariDataSource;
+import javax.sql.DataSource;
+public class ExampleUtils {
+ DataSource createDataSource() {
+ HikariConfig config = new HikariConfig();
+ config.setJdbcUrl("jdbc:shardingsphere:classpath:demo.yaml");
+ config.setDriverClassName("org.apache.shardingsphere.driver.ShardingSphereDriver");
+ return new HikariDataSource(config);
+ }
+}
+```
+
+Logical SQL can be executed directly on this `javax.sql.DataSource`-backed ShardingSphere DataSource; enjoy it,
+
+```sql
+INSERT INTO t_address (address_id, address_name) VALUES (1, "address_test_1");
+DELETE FROM t_address WHERE address_id=1;
+```
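+
+A minimal Java sketch of running these logical SQL statements through the DataSource built above might look as follows; the class name `ExampleRunner` is only an illustration and reuses the `ExampleUtils` class shown earlier.
+
+```java
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+public class ExampleRunner {
+    public static void main(String[] args) throws SQLException {
+        // Obtain the ShardingSphere DataSource built from demo.yaml and execute the logical SQL on it.
+        try (Connection connection = new ExampleUtils().createDataSource().getConnection();
+             Statement statement = connection.createStatement()) {
+            statement.executeUpdate("INSERT INTO t_address (address_id, address_name) VALUES (1, \"address_test_1\")");
+            statement.executeUpdate("DELETE FROM t_address WHERE address_id=1");
+        }
+    }
+}
+```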
+
+## Usage Limitations
+
+### Version limitations
+
+The HiveServer2 `2.x` and HiveServer2 `3.x` release lines have reached end of life.
+See https://lists.apache.org/thread/0mh4hvpllzv877bkx1f9srv1c3hlbtt9 and https://lists.apache.org/thread/mpzrv7v1hqqo4cmp0zorswnbvd7ltmbp .
+ShardingSphere only performs integration testing against HiveServer2 `4.0.1`.
+
+### Limitations of the HiveServer2 JDBC Driver Uber JAR
+
+Affected by https://issues.apache.org/jira/browse/HIVE-28445 ,
+users should not use `org.apache.hive:hive-jdbc:4.0.1` with the `classifier` of `standalone`, to avoid dependency conflicts.
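+
+For reference, the dependency declaration to avoid would look roughly like the following sketch; the `classifier` element is what pulls in the standalone uber JAR.
+
+```xml
+<!-- A sketch of the declaration to avoid: the standalone classifier pulls in an uber JAR that causes dependency conflicts. -->
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-jdbc</artifactId>
+    <version>4.0.1</version>
+    <classifier>standalone</classifier>
+</dependency>
+```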
+
+### Embedded HiveServer2 limitations
+
+Embedded HiveServer2 is no longer considered user-friendly by the Hive community, and users should not try to start an embedded HiveServer2 through ShardingSphere's configuration files.
+Users should always start HiveServer2 through the HiveServer2 Docker Image `apache/hive:4.0.1`.
+See https://issues.apache.org/jira/browse/HIVE-28418 .
+
+### Hadoop limitations
+
+Users can only use Hadoop `3.3.6` as the underlying Hadoop dependency of HiveServer2 JDBC Driver `4.0.1`.
+HiveServer2 JDBC Driver `4.0.1` does not support Hadoop `3.4.1`,
+see https://github.com/apache/hive/pull/5500 .
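+
+If other dependencies transitively pull in a different Hadoop version, one possible way to keep the resolved version at `3.3.6` is a Maven `dependencyManagement` entry; this is only a sketch, not a configuration required by ShardingSphere.
+
+```xml
+<dependencyManagement>
+    <dependencies>
+        <!-- Pin the Hadoop client API to the version supported by HiveServer2 JDBC Driver 4.0.1. -->
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-client-api</artifactId>
+            <version>3.3.6</version>
+        </dependency>
+    </dependencies>
+</dependencyManagement>
+```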
+
+### Database connection pool limitations
+
+Since the HiveServer2 JDBC Driver does not implement `java.sql.DatabaseMetaData#getURL()`,
+ShardingSphere performs fuzzy matching at `org.apache.shardingsphere.infra.database.DatabaseTypeEngine#getStorageType(javax.sql.DataSource)`,
+so for now users can only connect to HiveServer2 through the `com.zaxxer.hikari.HikariDataSource` database connection pool.
+
+If users need to connect to HiveServer2 through the `com.alibaba.druid.pool.DruidDataSource` database connection pool,
+they should consider implementing `java.sql.DatabaseMetaData#getURL()` on Hive's master branch,
+rather than trying to modify ShardingSphere's internal classes.
+
+### SQL limitations
+
+The ShardingSphere JDBC DataSource does not yet support executing HiveServer2's `SET`, `CREATE TABLE`, and `TRUNCATE TABLE` statements.
+
+Users should consider submitting a PR containing unit tests for ShardingSphere.
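+
+Until such support exists, statements of this kind can be executed against HiveServer2 directly through the native Hive JDBC driver, as the table creation steps above already do via DBeaver CE. A minimal sketch, assuming an unsecured local HiveServer2 and using the hypothetical class name `NativeHiveInit`:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+public class NativeHiveInit {
+    public static void main(String[] args) throws SQLException {
+        // Connect with org.apache.hive.jdbc.HiveDriver; username and password are left empty,
+        // matching the DBeaver CE connection settings described above.
+        try (Connection connection = DriverManager.getConnection("jdbc:hive2://localhost:10000/demo_ds_0", "", "");
+             Statement statement = connection.createStatement()) {
+            statement.execute("set iceberg.mr.schema.auto.conversion=true");
+            statement.execute("CREATE TABLE IF NOT EXISTS t_address (address_id BIGINT NOT NULL, address_name VARCHAR(100) NOT NULL, PRIMARY KEY (address_id) disable novalidate) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2')");
+            statement.execute("TRUNCATE TABLE t_address");
+        }
+    }
+}
+```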
+
+### jdbcURL limitations
+
+For ShardingSphere's configuration files, there are limitations on the jdbcURL of HiveServer2. As background,
+the jdbcURL format of HiveServer2 is `jdbc:hive2://<host1>:<port1>,<host2>:<port2>/dbName;initFile=<file>;sess_var_list?hive_conf_list#hive_var_list`.
+ShardingSphere's current parsing of these parameters only supports the `;hive_conf_list` part, as represented by `jdbc:hive2://localhost:10000/demo_ds_1;initFile=/tmp/init.sql`.
+
+If users need to use the `;sess_var_list` or `#hive_var_list` jdbcURL parameters, consider submitting a PR containing unit tests for ShardingSphere.
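+
+For example, a data source entry carrying the supported `;initFile=` parameter might look like the following sketch; the `/tmp/init.sql` path is only an illustration.
+
+```yaml
+dataSources:
+  ds_1:
+    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+    driverClassName: org.apache.hive.jdbc.HiveDriver
+    jdbcUrl: jdbc:hive2://localhost:10000/demo_ds_1;initFile=/tmp/init.sql
+```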
+
+### Prerequisites for using DML SQL statements on the ShardingSphere data source
+
+In order to use DML SQL statements such as `delete` when connecting to HiveServer2, users should consider using only ACID-supported tables in ShardingSphere JDBC.
+`apache/hive` provides a variety of transaction solutions.
+
+The first option is to use ACID tables; a possible table creation process is as follows.
+Due to their outdated catalog-based table format, users may have to wait before and after DML statement execution for HiveServer2 to complete the inefficient DML operations.
+
+```sql
+set metastore.compactor.initiator.on=true;
+set metastore.compactor.cleaner.on=true;
+set metastore.compactor.worker.threads=5;
+
+set hive.support.concurrency=true;
+set hive.exec.dynamic.partition.mode=nonstrict;
+set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
+
+CREATE TABLE IF NOT EXISTS t_order
+(
+ order_id BIGINT,
+ order_type INT,
+ user_id INT NOT NULL,
+ address_id BIGINT NOT NULL,
+ status VARCHAR(50),
+ PRIMARY KEY (order_id) disable novalidate
+) CLUSTERED BY (order_id) INTO 2 BUCKETS STORED AS ORC TBLPROPERTIES ('transactional' = 'true');
+```
+
+The second option is to use Iceberg tables; a possible table creation process is as follows. The Apache Iceberg table format is poised to replace the traditional Hive table format in the coming years,
+see https://blog.cloudera.com/from-hive-tables-to-iceberg-tables-hassle-free/ .
+
+```sql
+set iceberg.mr.schema.auto.conversion=true;
+
+CREATE TABLE IF NOT EXISTS t_order
+(
+ order_id BIGINT,
+ order_type INT,
+ user_id INT NOT NULL,
+ address_id BIGINT NOT NULL,
+ status VARCHAR(50),
+ PRIMARY KEY (order_id) disable novalidate
+) STORED BY ICEBERG STORED AS ORC TBLPROPERTIES ('format-version' = '2');
+```
+
+### Transaction limitations
+
+HiveServer2 does not support local transactions, XA transactions, or Seata AT mode transactions at the ShardingSphere integration level;
+more discussion is at https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions .
+
+### DBeaver CE limitations
+
+When using DBeaver CE to connect to HiveServer2, make sure the DBeaver CE version is greater than or equal to `24.2.5`.
+See https://github.com/dbeaver/dbeaver/pull/35059 .
diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.en.md b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.en.md
new file mode 100644
index 00000000000000..c83669c877a266
--- /dev/null
+++ b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/hiveserver2/_index.en.md
@@ -0,0 +1,6 @@
++++
+title = "HiveServer2"
+weight = 6
++++
+
+TODO.
diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.cn.md b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.cn.md
index 1ff66f3ce460b7..6f011d9187a340 100644
--- a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.cn.md
+++ b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.cn.md
@@ -4,7 +4,7 @@ weight = 6
+++
ShardingSphere does not provide support for the `driverClassName` of `org.testcontainers.jdbc.ContainerDatabaseDriver` by default.
-To use a `jdbcUrl` like `jdbc:tc:postgresql:17.1-bookworm://test-native-databases-postgres/demo_ds_0` for data nodes in ShardingSphere's configuration file,
+To use a `jdbcUrl` like `jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_0` for data nodes in ShardingSphere's configuration file,
the possible Maven dependencies are as follows,
```xml
@@ -28,7 +28,27 @@ ShardingSphere 默认情况下不提供对 `org.testcontainers.jdbc.ContainerDat
```
-`org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` provides support for testcontainers-java style jdbcURL,
+To use the `org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` module,
+the user's machine always needs to have Docker Engine, or an alternative container runtime that complies with https://java.testcontainers.org/supported_docker_environment/ , installed.
+After that, jdbcURLs with the `jdbc:tc:postgresql:` prefix can be used normally in ShardingSphere's YAML configuration file.
+
+```yaml
+dataSources:
+ ds_0:
+ dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+ driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver
+ jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_0
+ ds_1:
+ dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+ driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver
+ jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_1
+ ds_2:
+ dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+ driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver
+ jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_2
+```
+
+`org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` provides support for testcontainers-java style jdbcURL,
including but not limited to,
1. Maven module `org.testcontainers:clickhouse:1.20.3` that provides support for the `jdbc:tc:clickhouse:` jdbcURL prefix
diff --git a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.en.md b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.en.md
index 0593e098d52eae..e2698a09e71384 100644
--- a/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.en.md
+++ b/docs/document/content/user-manual/shardingsphere-jdbc/optional-plugins/testcontainers/_index.en.md
@@ -4,7 +4,7 @@ weight = 6
+++
ShardingSphere does not provide support for `driverClassName` of `org.testcontainers.jdbc.ContainerDatabaseDriver` by default.
-To use `jdbcUrl` like `jdbc:tc:postgresql:17.1-bookworm://test-native-databases-postgres/demo_ds_0` for data nodes in ShardingSphere's configuration file,
+To use `jdbcUrl` like `jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_0` for data nodes in ShardingSphere's configuration file,
the possible Maven dependencies are as follows,
```xml
@@ -28,7 +28,27 @@ the possible Maven dependencies are as follows,
```
-`org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` provides support for jdbcURL in the testcontainers-java partition,
+Once these dependencies are in place, you can use jdbcURL with the `jdbc:tc:postgresql:` prefix normally in the YAML configuration file of ShardingSphere.
+
+```yaml
+dataSources:
+ ds_0:
+ dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+ driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver
+ jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_0
+ ds_1:
+ dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+ driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver
+ jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_1
+ ds_2:
+ dataSourceClassName: com.zaxxer.hikari.HikariDataSource
+ driverClassName: org.testcontainers.jdbc.ContainerDatabaseDriver
+ jdbcUrl: jdbc:tc:postgresql:17.1-bookworm://test-databases-postgres/demo_ds_2
+```
+
+To use the `org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` module,
+the user machine always needs to have Docker Engine or alternative container runtimes that comply with https://java.testcontainers.org/supported_docker_environment/ installed.
+`org.apache.shardingsphere:shardingsphere-infra-database-testcontainers` provides support for testcontainers-java style jdbcURL,
including but not limited to,
1. Maven module `org.testcontainers:clickhouse:1.20.3` that provides support for jdbcURL prefixes for `jdbc:tc:clickhouse:`