
Issue with hudi/trino-hudi-minio: HMS won't start up; missing instructions #38

Closed
alberttwong opened this issue Feb 2, 2024 · 11 comments


alberttwong commented Feb 2, 2024

Attached log: trino-hudi-minio.txt

ref: https://trino.io/episodes/41.html and https://www.youtube.com/watch?v=aL3PfMjvFM4&t=4671s

@alberttwong (Author)

Switched to the older mariadb:10.6.9 image. Still cannot connect to HMS.
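For reference, the override I'm experimenting with looks roughly like this — a sketch only, assuming the service names used in the repo's docker-compose file; the `platform:` lines are a guess at working around the linux/amd64 vs linux/arm64 image mismatch warnings on Apple Silicon:

```yaml
# docker-compose.override.yml (hypothetical)
services:
  mariadb:
    # pin to the older image mentioned above
    image: mariadb:10.6.9
  hive-metastore:
    # force amd64 emulation; this image does not publish an arm64 build
    platform: linux/amd64
  spark-hudi:
    platform: linux/amd64
```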

atwong@Albert-CelerData trino-hudi-minio % docker-compose up
[+] Running 12/12
 ✔ mariadb 11 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                                                                         7.1s
   ✔ 7a9f619ee5e9 Pull complete                                                                                                                                                                                                   1.0s
   ✔ 1387e18688e2 Pull complete                                                                                                                                                                                                   0.6s
   ✔ 915da5088417 Pull complete                                                                                                                                                                                                   0.9s
   ✔ b49f9bf1bf52 Pull complete                                                                                                                                                                                                   1.6s
   ✔ c05c791accac Pull complete                                                                                                                                                                                                   1.4s
   ✔ 2a130f4ee234 Pull complete                                                                                                                                                                                                   1.8s
   ✔ 2990908d8220 Pull complete                                                                                                                                                                                                   2.0s
   ✔ a15059ebc811 Pull complete                                                                                                                                                                                                   2.2s
   ✔ 741678cc2b78 Pull complete                                                                                                                                                                                                   4.1s
   ✔ 8a252ae5930e Pull complete                                                                                                                                                                                                   2.6s
   ✔ 652f0cff833e Pull complete                                                                                                                                                                                                   2.7s
[+] Running 9/6
 ✔ Network trino-hudi-minio_trino-network                                                                                                                        Created                                                          0.0s
 ✔ Volume "trino-hudi-minio_minio-data"                                                                                                                          Created                                                          0.0s
 ✔ Container trino-hudi-minio-mariadb-1                                                                                                                          Created                                                          0.5s
 ✔ Container trino-hudi-minio-trino-coordinator-1                                                                                                                Created                                                          0.5s
 ✔ Container minio                                                                                                                                               Created                                                          0.5s
 ✔ Container trino-hudi-minio-hive-metastore-1                                                                                                                   Created                                                          0.0s
 ! hive-metastore The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested                                                                  0.0s
 ✔ Container trino-hudi-minio-spark-hudi-1                                                                                                                       Created                                                          0.0s
 ! spark-hudi The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested                                                                      0.0s
Attaching to minio, hive-metastore-1, mariadb-1, spark-hudi-1, trino-coordinator-1
mariadb-1            | 2024-02-02 16:59:15+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.9+maria~ubu2004 started.
trino-coordinator-1  | + launcher_opts=(--etc-dir /etc/trino)
trino-coordinator-1  | + grep -s -q node.id /etc/trino/node.properties
mariadb-1            | 2024-02-02 16:59:15+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
trino-coordinator-1  | + launcher_opts+=("-Dnode.id=${HOSTNAME}")
trino-coordinator-1  | + exec /usr/lib/trino/bin/launcher run --etc-dir /etc/trino -Dnode.id=trino-coordinator
mariadb-1            | 2024-02-02 16:59:15+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.9+maria~ubu2004 started.
trino-coordinator-1  | Unrecognized VM option 'UseBiasedLocking'
trino-coordinator-1  | Error: Could not create the Java Virtual Machine.
trino-coordinator-1  | Error: A fatal exception has occurred. Program will exit.
hive-metastore-1     | Waiting for database on mariadb to launch on 3306 ...
mariadb-1            | 2024-02-02 16:59:15+00:00 [Note] [Entrypoint]: Initializing database files
minio                | WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
minio                |          Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
minio                | Formatting 1st pool, 1 set(s), 1 drives per set.
minio                | WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
minio                | MinIO Object Storage Server
minio                | Copyright: 2015-2023 MinIO, Inc.
minio                | License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
minio                | Version: RELEASE.2023-11-20T22-40-07Z (go1.21.4 linux/arm64)
minio                |
minio                | Status:         1 Online, 0 Offline.
minio                | S3-API: http://172.28.0.3:9000  http://127.0.0.1:9000
minio                | Console: http://172.28.0.3:9001 http://127.0.0.1:9001
minio                |
minio                | Documentation: https://min.io/docs/minio/linux/index.html
minio                | Warning: The standard parity is set to 0. This can lead to data loss.
minio                |
minio                |  You are running an older version of MinIO released 2 months before the latest release
minio                |  Update: Run `mc admin update`
minio                |
minio                |
trino-coordinator-1 exited with code 1
mariadb-1            |
mariadb-1            |
mariadb-1            | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb-1            | To do so, start the server, then issue the following command:
mariadb-1            |
mariadb-1            | '/usr/bin/mysql_secure_installation'
mariadb-1            |
mariadb-1            | which will also give you the option of removing the test
mariadb-1            | databases and anonymous user created by default.  This is
mariadb-1            | strongly recommended for production servers.
mariadb-1            |
mariadb-1            | See the MariaDB Knowledgebase at https://mariadb.com/kb
mariadb-1            |
mariadb-1            | Please report any problems at https://mariadb.org/jira
mariadb-1            |
mariadb-1            | The latest information about MariaDB is available at https://mariadb.org/.
mariadb-1            |
mariadb-1            | Consider joining MariaDB's strong and vibrant community:
mariadb-1            | https://mariadb.org/get-involved/
mariadb-1            |
mariadb-1            | 2024-02-02 16:59:16+00:00 [Note] [Entrypoint]: Database files initialized
mariadb-1            | 2024-02-02 16:59:16+00:00 [Note] [Entrypoint]: Starting temporary server
mariadb-1            | 2024-02-02 16:59:16+00:00 [Note] [Entrypoint]: Waiting for server startup
mariadb-1            | 2024-02-02 16:59:16 0 [Note] mariadbd (server 10.6.9-MariaDB-1:10.6.9+maria~ubu2004) starting as process 106 ...
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: Number of pools: 1
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: Using ARMv8 crc32 + pmull instructions
mariadb-1            | 2024-02-02 16:59:16 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: Using Linux native AIO
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: 128 rollback segments are active.
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mariadb-1            | 2024-02-02 16:59:16 0 [Note] InnoDB: 10.6.9 started; log sequence number 41335; transaction id 14
mariadb-1            | 2024-02-02 16:59:16 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb-1            | 2024-02-02 16:59:17 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
mariadb-1            | 2024-02-02 16:59:17 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
mariadb-1            | 2024-02-02 16:59:17 0 [Note] mariadbd: ready for connections.
mariadb-1            | Version: '10.6.9-MariaDB-1:10.6.9+maria~ubu2004'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution
mariadb-1            | 2024-02-02 16:59:17+00:00 [Note] [Entrypoint]: Temporary server started.
spark-hudi-1         | WARNING: An illegal reflective access operation has occurred
spark-hudi-1         | WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/spark-3.2.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.2.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
spark-hudi-1         | WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
spark-hudi-1         | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
spark-hudi-1         | WARNING: All illegal access operations will be denied in a future release
mariadb-1            | 2024-02-02 16:59:18+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
mariadb-1            | 2024-02-02 16:59:19+00:00 [Note] [Entrypoint]: Creating database metastore_db
mariadb-1            | 2024-02-02 16:59:19+00:00 [Note] [Entrypoint]: Creating user admin
mariadb-1            | 2024-02-02 16:59:19+00:00 [Note] [Entrypoint]: Giving user admin access to schema metastore_db
mariadb-1            |
mariadb-1            | 2024-02-02 16:59:19+00:00 [Note] [Entrypoint]: Stopping temporary server
mariadb-1            | 2024-02-02 16:59:19 0 [Note] mariadbd (initiated by: root[root] @ localhost []): Normal shutdown
mariadb-1            | 2024-02-02 16:59:19 0 [Note] InnoDB: FTS optimize thread exiting.
spark-hudi-1         | 0    [main] INFO  org.apache.spark.sql.hive.thriftserver.HiveThriftServer2  - Started daemon with process name: 1@spark-hudi
spark-hudi-1         | 4    [main] INFO  org.apache.spark.util.SignalUtils  - Registering signal handler for TERM
spark-hudi-1         | 4    [main] INFO  org.apache.spark.util.SignalUtils  - Registering signal handler for HUP
spark-hudi-1         | 4    [main] INFO  org.apache.spark.util.SignalUtils  - Registering signal handler for INT
spark-hudi-1         | 10   [main] INFO  org.apache.spark.sql.hive.thriftserver.HiveThriftServer2  - Starting SparkContext
mariadb-1            | 2024-02-02 16:59:19 0 [Note] InnoDB: Starting shutdown...
mariadb-1            | 2024-02-02 16:59:19 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mariadb-1            | 2024-02-02 16:59:19 0 [Note] InnoDB: Buffer pool(s) dump completed at 240202 16:59:19
spark-hudi-1         | 175  [main] INFO  org.apache.hadoop.hive.conf.HiveConf  - Found configuration file null
mariadb-1            | 2024-02-02 16:59:19 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
mariadb-1            | 2024-02-02 16:59:19 0 [Note] InnoDB: Shutdown completed; log sequence number 42309; transaction id 15
mariadb-1            | 2024-02-02 16:59:19 0 [Note] mariadbd: Shutdown complete
mariadb-1            |
spark-hudi-1         | 296  [main] INFO  org.apache.spark.SparkContext  - Running Spark version 3.2.1
spark-hudi-1         | 369  [main] WARN  org.apache.hadoop.util.NativeCodeLoader  - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
spark-hudi-1         | 540  [main] INFO  org.apache.spark.resource.ResourceUtils  - ==============================================================
spark-hudi-1         | 540  [main] INFO  org.apache.spark.resource.ResourceUtils  - No custom resources configured for spark.driver.
spark-hudi-1         | 541  [main] INFO  org.apache.spark.resource.ResourceUtils  - ==============================================================
spark-hudi-1         | 542  [main] INFO  org.apache.spark.SparkContext  - Submitted application: Thrift JDBC/ODBC Server
spark-hudi-1         | 565  [main] INFO  org.apache.spark.resource.ResourceProfile  - Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
spark-hudi-1         | 575  [main] INFO  org.apache.spark.resource.ResourceProfile  - Limiting resource is cpu
spark-hudi-1         | 576  [main] INFO  org.apache.spark.resource.ResourceProfileManager  - Added ResourceProfile id: 0
spark-hudi-1         | 631  [main] INFO  org.apache.spark.SecurityManager  - Changing view acls to: root
spark-hudi-1         | 632  [main] INFO  org.apache.spark.SecurityManager  - Changing modify acls to: root
spark-hudi-1         | 632  [main] INFO  org.apache.spark.SecurityManager  - Changing view acls groups to:
spark-hudi-1         | 632  [main] INFO  org.apache.spark.SecurityManager  - Changing modify acls groups to:
spark-hudi-1         | 633  [main] INFO  org.apache.spark.SecurityManager  - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
mariadb-1            | 2024-02-02 16:59:20+00:00 [Note] [Entrypoint]: Temporary server stopped
mariadb-1            |
mariadb-1            | 2024-02-02 16:59:20+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
mariadb-1            |
spark-hudi-1         | 1000 [main] INFO  org.apache.spark.util.Utils  - Successfully started service 'sparkDriver' on port 42891.
mariadb-1            | 2024-02-02 16:59:20 0 [Note] mariadbd (server 10.6.9-MariaDB-1:10.6.9+maria~ubu2004) starting as process 1 ...
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Number of pools: 1
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Using ARMv8 crc32 + pmull instructions
mariadb-1            | 2024-02-02 16:59:20 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
spark-hudi-1         | 1036 [main] INFO  org.apache.spark.SparkEnv  - Registering MapOutputTracker
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Using Linux native AIO
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: 128 rollback segments are active.
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: 10.6.9 started; log sequence number 42309; transaction id 14
mariadb-1            | 2024-02-02 16:59:20 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mariadb-1            | 2024-02-02 16:59:20 0 [Note] InnoDB: Buffer pool(s) load completed at 240202 16:59:20
mariadb-1            | 2024-02-02 16:59:20 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
mariadb-1            | 2024-02-02 16:59:20 0 [Note] Server socket created on IP: '0.0.0.0'.
mariadb-1            | 2024-02-02 16:59:20 0 [Note] Server socket created on IP: '::'.
spark-hudi-1         | 1075 [main] INFO  org.apache.spark.SparkEnv  - Registering BlockManagerMaster
mariadb-1            | 2024-02-02 16:59:20 0 [Note] mariadbd: ready for connections.
mariadb-1            | Version: '10.6.9-MariaDB-1:10.6.9+maria~ubu2004'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
spark-hudi-1         | 1092 [main] INFO  org.apache.spark.storage.BlockManagerMasterEndpoint  - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
spark-hudi-1         | 1093 [main] INFO  org.apache.spark.storage.BlockManagerMasterEndpoint  - BlockManagerMasterEndpoint up
spark-hudi-1         | 1097 [main] INFO  org.apache.spark.SparkEnv  - Registering BlockManagerMasterHeartbeat
spark-hudi-1         | 1123 [main] INFO  org.apache.spark.storage.DiskBlockManager  - Created local directory at /tmp/blockmgr-996d8eb1-2231-44e0-a287-7ab10a886e1f
spark-hudi-1         | 1147 [main] INFO  org.apache.spark.storage.memory.MemoryStore  - MemoryStore started with capacity 434.4 MiB
spark-hudi-1         | 1176 [main] INFO  org.apache.spark.SparkEnv  - Registering OutputCommitCoordinator
spark-hudi-1         | 1301 [main] INFO  org.sparkproject.jetty.util.log  - Logging initialized @3598ms to org.sparkproject.jetty.util.log.Slf4jLog
spark-hudi-1         | 1376 [main] INFO  org.sparkproject.jetty.server.Server  - jetty-9.4.43.v20210629; built: 2021-06-30T11:07:22.254Z; git: 526006ecfa3af7f1a27ef3a288e2bef7ea9dd7e8; jvm 11.0.16.1+1-LTS
spark-hudi-1         | 1401 [main] INFO  org.sparkproject.jetty.server.Server  - Started @3699ms
spark-hudi-1         | 1474 [main] INFO  org.sparkproject.jetty.server.AbstractConnector  - Started ServerConnector@262816a8{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
spark-hudi-1         | 1475 [main] INFO  org.apache.spark.util.Utils  - Successfully started service 'SparkUI' on port 4040.
spark-hudi-1         | 1506 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@a9f023e{/jobs,null,AVAILABLE,@Spark}
spark-hudi-1         | 1509 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@36681447{/jobs/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1510 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@726aa968{/jobs/job,null,AVAILABLE,@Spark}
spark-hudi-1         | 1513 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@328d044f{/jobs/job/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1514 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@4745e9c{/stages,null,AVAILABLE,@Spark}
spark-hudi-1         | 1515 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@75de29c0{/stages/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1516 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@296e281a{/stages/stage,null,AVAILABLE,@Spark}
spark-hudi-1         | 1518 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6b350309{/stages/stage/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1518 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@588f63c{/stages/pool,null,AVAILABLE,@Spark}
spark-hudi-1         | 1519 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@1981d861{/stages/pool/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1519 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@53f4c1e6{/storage,null,AVAILABLE,@Spark}
spark-hudi-1         | 1522 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6342d610{/storage/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1523 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@784abd3e{/storage/rdd,null,AVAILABLE,@Spark}
spark-hudi-1         | 1523 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@434514d8{/storage/rdd/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1525 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@4613311f{/environment,null,AVAILABLE,@Spark}
spark-hudi-1         | 1526 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@ec8f4b9{/environment/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1527 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@5484117b{/executors,null,AVAILABLE,@Spark}
spark-hudi-1         | 1528 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@7efb53af{/executors/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1529 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@3dfa819{/executors/threadDump,null,AVAILABLE,@Spark}
spark-hudi-1         | 1531 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@68ab0936{/executors/threadDump/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1545 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@42b84286{/static,null,AVAILABLE,@Spark}
spark-hudi-1         | 1546 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@350d3f4d{/,null,AVAILABLE,@Spark}
spark-hudi-1         | 1550 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@73844119{/api,null,AVAILABLE,@Spark}
spark-hudi-1         | 1551 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@411c6d44{/jobs/job/kill,null,AVAILABLE,@Spark}
spark-hudi-1         | 1552 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@748d2277{/stages/stage/kill,null,AVAILABLE,@Spark}
spark-hudi-1         | 1556 [main] INFO  org.apache.spark.ui.SparkUI  - Bound SparkUI to 0.0.0.0, and started at http://spark-hudi:4040
spark-hudi-1         | 1799 [main] INFO  org.apache.spark.executor.Executor  - Starting executor ID driver on host spark-hudi
spark-hudi-1         | 1843 [main] INFO  org.apache.spark.util.Utils  - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42151.
spark-hudi-1         | 1843 [main] INFO  org.apache.spark.network.netty.NettyBlockTransferService  - Server created on spark-hudi:42151
spark-hudi-1         | 1848 [main] INFO  org.apache.spark.storage.BlockManager  - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
spark-hudi-1         | 1856 [main] INFO  org.apache.spark.storage.BlockManagerMaster  - Registering BlockManager BlockManagerId(driver, spark-hudi, 42151, None)
spark-hudi-1         | 1861 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerMasterEndpoint  - Registering block manager spark-hudi:42151 with 434.4 MiB RAM, BlockManagerId(driver, spark-hudi, 42151, None)
spark-hudi-1         | 1874 [main] INFO  org.apache.spark.storage.BlockManagerMaster  - Registered BlockManager BlockManagerId(driver, spark-hudi, 42151, None)
spark-hudi-1         | 1875 [main] INFO  org.apache.spark.storage.BlockManager  - Initialized BlockManager: BlockManagerId(driver, spark-hudi, 42151, None)
mariadb-1            | 2024-02-02 16:59:21 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.28.0.5' (This connection closed normally without authentication)
hive-metastore-1     | Database on mariadb:3306 started
hive-metastore-1     | Init apache hive metastore on mariadb:3306
spark-hudi-1         | 2078 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6870c3c2{/metrics/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 2402 [main] INFO  org.apache.spark.sql.internal.SharedState  - Setting hive.metastore.warehouse.dir ('hdfs://hadoop-master:9000/user/hive/warehouse') to the value of spark.sql.warehouse.dir.
spark-hudi-1         | 2487 [main] WARN  org.apache.hadoop.fs.FileSystem  - Failed to initialize fileystem hdfs://hadoop-master:9000/user/hive/warehouse: java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop-master
spark-hudi-1         | 2489 [main] WARN  org.apache.spark.sql.internal.SharedState  - Cannot qualify the warehouse path, leaving it unqualified.
spark-hudi-1         | java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop-master
spark-hudi-1         | 	at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:466)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:134)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:201)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:186)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
spark-hudi-1         | 	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState$.qualifyWarehousePath(SharedState.scala:282)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.liftedTree1$1(SharedState.scala:80)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:79)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.$anonfun$sharedState$1(SparkSession.scala:139)
spark-hudi-1         | 	at scala.Option.getOrElse(Option.scala:189)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:139)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:138)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.$anonfun$sessionState$2(SparkSession.scala:158)
spark-hudi-1         | 	at scala.Option.getOrElse(Option.scala:189)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:156)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:153)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:62)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:96)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
spark-hudi-1         | 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
spark-hudi-1         | 	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
spark-hudi-1         | Caused by: java.net.UnknownHostException: hadoop-master
spark-hudi-1         | 	... 38 more
spark-hudi-1         | 2510 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@33bb3f86{/SQL,null,AVAILABLE,@Spark}
spark-hudi-1         | 2511 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6ac4c3f7{/SQL/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 2513 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@1150d471{/SQL/execution,null,AVAILABLE,@Spark}
spark-hudi-1         | 2516 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@654e6a90{/SQL/execution/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 2531 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@36ef1d65{/static/sql,null,AVAILABLE,@Spark}
hive-metastore-1     | SLF4J: Class path contains multiple SLF4J bindings.
hive-metastore-1     | SLF4J: Found binding in [jar:file:/opt/apache-hive-metastore-3.0.0-bin/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hive-metastore-1     | SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hive-metastore-1     | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
hive-metastore-1     | SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
spark-hudi-1         | 3399 [main] INFO  org.apache.spark.sql.hive.HiveUtils  - Initializing HiveMetastoreConnection version 2.3.9 using Spark classes.
spark-hudi-1         | 3728 [main] INFO  org.apache.spark.sql.hive.client.HiveClientImpl  - Warehouse location for Hive client (version 2.3.9) is hdfs://hadoop-master:9000/user/hive/warehouse
spark-hudi-1         | 3776 [main] INFO  hive.metastore  - Trying to connect to metastore with URI thrift://hive-metastore:9083
spark-hudi-1         | 3801 [main] WARN  hive.metastore  - Failed to connect to the MetaStore Server...
spark-hudi-1         | 3802 [main] INFO  hive.metastore  - Waiting 1 seconds before next connection attempt.
spark-hudi-1         | 4804 [main] INFO  hive.metastore  - Trying to connect to metastore with URI thrift://hive-metastore:9083
spark-hudi-1         | 4805 [main] WARN  hive.metastore  - Failed to connect to the MetaStore Server...
spark-hudi-1         | 4805 [main] INFO  hive.metastore  - Waiting 1 seconds before next connection attempt.
spark-hudi-1         | 5805 [main] INFO  hive.metastore  - Trying to connect to metastore with URI thrift://hive-metastore:9083
spark-hudi-1         | 5806 [main] WARN  hive.metastore  - Failed to connect to the MetaStore Server...
spark-hudi-1         | 5806 [main] INFO  hive.metastore  - Waiting 1 seconds before next connection attempt.
hive-metastore-1     | Metastore connection URL:	 jdbc:mysql://mariadb:3306/metastore_db
hive-metastore-1     | Metastore Connection Driver :	 com.mysql.cj.jdbc.Driver
hive-metastore-1     | Metastore connection User:	 admin
hive-metastore-1     | Starting metastore schema initialization to 3.0.0
hive-metastore-1     | Initialization script hive-schema-3.0.0.mysql.sql
hive-metastore-1     | 1/506        -- MySQL dump 10.13  Distrib 5.5.25, for osx10.6 (i386)
hive-metastore-1     | 2/506        --
hive-metastore-1     | 3/506        -- Host: localhost    Database: test
hive-metastore-1     | 4/506        -- ------------------------------------------------------
hive-metastore-1     | 5/506        -- Server version	5.5.25
hive-metastore-1     | 6/506
hive-metastore-1     | 7/506        /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
hive-metastore-1     | No rows affected (0.009 seconds)
hive-metastore-1     | 8/506        /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 9/506        /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 10/506       /*!40101 SET NAMES utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 11/506       /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 12/506       /*!40103 SET TIME_ZONE='+00:00' */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 13/506       /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 14/506       /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 15/506       /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 16/506       /*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 17/506
hive-metastore-1     | 18/506       --
hive-metastore-1     | 19/506       -- Table structure for table `BUCKETING_COLS`
hive-metastore-1     | 20/506       --
hive-metastore-1     | 21/506
hive-metastore-1     | 22/506       /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 23/506       /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 24/506       CREATE TABLE IF NOT EXISTS `BUCKETING_COLS` (
hive-metastore-1     | `SD_ID` bigint(20) NOT NULL,
hive-metastore-1     | `BUCKET_COL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `INTEGER_IDX` int(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`SD_ID`,`INTEGER_IDX`),
hive-metastore-1     | KEY `BUCKETING_COLS_N49` (`SD_ID`),
hive-metastore-1     | CONSTRAINT `BUCKETING_COLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.005 seconds)
hive-metastore-1     | 25/506       /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 26/506
hive-metastore-1     | 27/506       --
hive-metastore-1     | 28/506       -- Table structure for table `CDS`
hive-metastore-1     | 29/506       --
hive-metastore-1     | 30/506
hive-metastore-1     | 31/506       /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 32/506       /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 33/506       CREATE TABLE IF NOT EXISTS `CDS` (
hive-metastore-1     | `CD_ID` bigint(20) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`CD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 34/506       /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 35/506
hive-metastore-1     | 36/506       --
hive-metastore-1     | 37/506       -- Table structure for table `COLUMNS_V2`
hive-metastore-1     | 38/506       --
hive-metastore-1     | 39/506
hive-metastore-1     | 40/506       /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 41/506       /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 42/506       CREATE TABLE IF NOT EXISTS `COLUMNS_V2` (
hive-metastore-1     | `CD_ID` bigint(20) NOT NULL,
hive-metastore-1     | `COMMENT` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `COLUMN_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `TYPE_NAME` MEDIUMTEXT DEFAULT NULL,
hive-metastore-1     | `INTEGER_IDX` int(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`CD_ID`,`COLUMN_NAME`),
hive-metastore-1     | KEY `COLUMNS_V2_N49` (`CD_ID`),
hive-metastore-1     | CONSTRAINT `COLUMNS_V2_FK1` FOREIGN KEY (`CD_ID`) REFERENCES `CDS` (`CD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 43/506       /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 44/506
hive-metastore-1     | 45/506       --
hive-metastore-1     | 46/506       -- Table structure for table `DATABASE_PARAMS`
hive-metastore-1     | 47/506       --
hive-metastore-1     | 48/506
hive-metastore-1     | 49/506       /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 50/506       /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 51/506       CREATE TABLE IF NOT EXISTS `DATABASE_PARAMS` (
hive-metastore-1     | `DB_ID` bigint(20) NOT NULL,
hive-metastore-1     | `PARAM_KEY` varchar(180) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `PARAM_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`DB_ID`,`PARAM_KEY`),
hive-metastore-1     | KEY `DATABASE_PARAMS_N49` (`DB_ID`),
hive-metastore-1     | CONSTRAINT `DATABASE_PARAMS_FK1` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 52/506       /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 53/506
hive-metastore-1     | 54/506       CREATE TABLE `CTLGS` (
hive-metastore-1     | `CTLG_ID` BIGINT PRIMARY KEY,
hive-metastore-1     | `NAME` VARCHAR(256),
hive-metastore-1     | `DESC` VARCHAR(4000),
hive-metastore-1     | `LOCATION_URI` VARCHAR(4000) NOT NULL,
hive-metastore-1     | UNIQUE KEY `UNIQUE_CATALOG` (`NAME`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 55/506
hive-metastore-1     | 56/506
hive-metastore-1     | 57/506       --
hive-metastore-1     | 58/506       -- Table structure for table `DBS`
hive-metastore-1     | 59/506       --
hive-metastore-1     | 60/506
hive-metastore-1     | 61/506       /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 62/506       /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 63/506       CREATE TABLE IF NOT EXISTS `DBS` (
hive-metastore-1     | `DB_ID` bigint(20) NOT NULL,
hive-metastore-1     | `DESC` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `DB_LOCATION_URI` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `OWNER_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `OWNER_TYPE` varchar(10) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `CTLG_NAME` varchar(256) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`DB_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUE_DATABASE` (`NAME`, `CTLG_NAME`),
hive-metastore-1     | CONSTRAINT `CTLG_FK1` FOREIGN KEY (`CTLG_NAME`) REFERENCES `CTLGS` (`NAME`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 64/506       /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 65/506
hive-metastore-1     | 66/506       --
hive-metastore-1     | 67/506       -- Table structure for table `DB_PRIVS`
hive-metastore-1     | 68/506       --
hive-metastore-1     | 69/506
hive-metastore-1     | 70/506       /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 71/506       /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 72/506       CREATE TABLE IF NOT EXISTS `DB_PRIVS` (
hive-metastore-1     | `DB_GRANT_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `DB_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | `GRANT_OPTION` smallint(6) NOT NULL,
hive-metastore-1     | `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `DB_PRIV` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`DB_GRANT_ID`),
hive-metastore-1     | UNIQUE KEY `DBPRIVILEGEINDEX` (`DB_ID`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`DB_PRIV`,`GRANTOR`,`GRANTOR_TYPE`),
hive-metastore-1     | KEY `DB_PRIVS_N49` (`DB_ID`),
hive-metastore-1     | CONSTRAINT `DB_PRIVS_FK1` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 73/506       /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 74/506
hive-metastore-1     | 75/506       --
hive-metastore-1     | 76/506       -- Table structure for table `GLOBAL_PRIVS`
hive-metastore-1     | 77/506       --
hive-metastore-1     | 78/506
hive-metastore-1     | 79/506       /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 80/506       /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 81/506       CREATE TABLE IF NOT EXISTS `GLOBAL_PRIVS` (
hive-metastore-1     | `USER_GRANT_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `GRANT_OPTION` smallint(6) NOT NULL,
hive-metastore-1     | `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `USER_PRIV` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`USER_GRANT_ID`),
hive-metastore-1     | UNIQUE KEY `GLOBALPRIVILEGEINDEX` (`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`USER_PRIV`,`GRANTOR`,`GRANTOR_TYPE`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 82/506       /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 83/506
hive-metastore-1     | 84/506       --
hive-metastore-1     | 85/506       -- Table structure for table `IDXS`
hive-metastore-1     | 86/506       --
hive-metastore-1     | 87/506
hive-metastore-1     | 88/506       /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 89/506       /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 90/506       CREATE TABLE IF NOT EXISTS `IDXS` (
hive-metastore-1     | `INDEX_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `DEFERRED_REBUILD` bit(1) NOT NULL,
hive-metastore-1     | `INDEX_HANDLER_CLASS` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `INDEX_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `INDEX_TBL_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | `LAST_ACCESS_TIME` int(11) NOT NULL,
hive-metastore-1     | `ORIG_TBL_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | `SD_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`INDEX_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUEINDEX` (`INDEX_NAME`,`ORIG_TBL_ID`),
hive-metastore-1     | KEY `IDXS_N51` (`SD_ID`),
hive-metastore-1     | KEY `IDXS_N50` (`INDEX_TBL_ID`),
hive-metastore-1     | KEY `IDXS_N49` (`ORIG_TBL_ID`),
hive-metastore-1     | CONSTRAINT `IDXS_FK1` FOREIGN KEY (`ORIG_TBL_ID`) REFERENCES `TBLS` (`TBL_ID`),
hive-metastore-1     | CONSTRAINT `IDXS_FK2` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`),
hive-metastore-1     | CONSTRAINT `IDXS_FK3` FOREIGN KEY (`INDEX_TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 91/506       /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 92/506
hive-metastore-1     | 93/506       --
hive-metastore-1     | 94/506       -- Table structure for table `INDEX_PARAMS`
hive-metastore-1     | 95/506       --
hive-metastore-1     | 96/506
hive-metastore-1     | 97/506       /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 98/506       /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 99/506       CREATE TABLE IF NOT EXISTS `INDEX_PARAMS` (
hive-metastore-1     | `INDEX_ID` bigint(20) NOT NULL,
hive-metastore-1     | `PARAM_KEY` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `PARAM_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`INDEX_ID`,`PARAM_KEY`),
hive-metastore-1     | KEY `INDEX_PARAMS_N49` (`INDEX_ID`),
hive-metastore-1     | CONSTRAINT `INDEX_PARAMS_FK1` FOREIGN KEY (`INDEX_ID`) REFERENCES `IDXS` (`INDEX_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 100/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 101/506
hive-metastore-1     | 102/506      --
hive-metastore-1     | 103/506      -- Table structure for table `NUCLEUS_TABLES`
hive-metastore-1     | 104/506      --
hive-metastore-1     | 105/506
hive-metastore-1     | 106/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 107/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 108/506      CREATE TABLE IF NOT EXISTS `NUCLEUS_TABLES` (
hive-metastore-1     | `CLASS_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `TYPE` varchar(4) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `OWNER` varchar(2) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `VERSION` varchar(20) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `INTERFACE_NAME` varchar(255) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`CLASS_NAME`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 109/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 110/506
hive-metastore-1     | 111/506      --
hive-metastore-1     | 112/506      -- Table structure for table `PARTITIONS`
hive-metastore-1     | 113/506      --
hive-metastore-1     | 114/506
hive-metastore-1     | 115/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 116/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 117/506      CREATE TABLE IF NOT EXISTS `PARTITIONS` (
hive-metastore-1     | `PART_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `LAST_ACCESS_TIME` int(11) NOT NULL,
hive-metastore-1     | `PART_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `SD_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | `TBL_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`PART_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUEPARTITION` (`PART_NAME`,`TBL_ID`),
hive-metastore-1     | KEY `PARTITIONS_N49` (`TBL_ID`),
hive-metastore-1     | KEY `PARTITIONS_N50` (`SD_ID`),
hive-metastore-1     | CONSTRAINT `PARTITIONS_FK1` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`),
hive-metastore-1     | CONSTRAINT `PARTITIONS_FK2` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.005 seconds)
hive-metastore-1     | 118/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 119/506
hive-metastore-1     | 120/506      --
hive-metastore-1     | 121/506      -- Table structure for table `PARTITION_EVENTS`
hive-metastore-1     | 122/506      --
hive-metastore-1     | 123/506
hive-metastore-1     | 124/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 125/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 126/506      CREATE TABLE IF NOT EXISTS `PARTITION_EVENTS` (
hive-metastore-1     | `PART_NAME_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CAT_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `EVENT_TIME` bigint(20) NOT NULL,
hive-metastore-1     | `EVENT_TYPE` int(11) NOT NULL,
hive-metastore-1     | `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `TBL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`PART_NAME_ID`),
hive-metastore-1     | KEY `PARTITIONEVENTINDEX` (`PARTITION_NAME`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.007 seconds)
hive-metastore-1     | 127/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 128/506
hive-metastore-1     | 129/506      --
hive-metastore-1     | 130/506      -- Table structure for table `PARTITION_KEYS`
hive-metastore-1     | 131/506      --
hive-metastore-1     | 132/506
hive-metastore-1     | 133/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 134/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 135/506      CREATE TABLE IF NOT EXISTS `PARTITION_KEYS` (
hive-metastore-1     | `TBL_ID` bigint(20) NOT NULL,
hive-metastore-1     | `PKEY_COMMENT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PKEY_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `PKEY_TYPE` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `INTEGER_IDX` int(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`TBL_ID`,`PKEY_NAME`),
hive-metastore-1     | KEY `PARTITION_KEYS_N49` (`TBL_ID`),
hive-metastore-1     | CONSTRAINT `PARTITION_KEYS_FK1` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 136/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 137/506
hive-metastore-1     | 138/506      --
hive-metastore-1     | 139/506      -- Table structure for table `PARTITION_KEY_VALS`
hive-metastore-1     | 140/506      --
hive-metastore-1     | 141/506
hive-metastore-1     | 142/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 143/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 144/506      CREATE TABLE IF NOT EXISTS `PARTITION_KEY_VALS` (
hive-metastore-1     | `PART_ID` bigint(20) NOT NULL,
hive-metastore-1     | `PART_KEY_VAL` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `INTEGER_IDX` int(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`PART_ID`,`INTEGER_IDX`),
hive-metastore-1     | KEY `PARTITION_KEY_VALS_N49` (`PART_ID`),
hive-metastore-1     | CONSTRAINT `PARTITION_KEY_VALS_FK1` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 145/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 146/506
hive-metastore-1     | 147/506      --
hive-metastore-1     | 148/506      -- Table structure for table `PARTITION_PARAMS`
hive-metastore-1     | 149/506      --
hive-metastore-1     | 150/506
hive-metastore-1     | 151/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 152/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 153/506      CREATE TABLE IF NOT EXISTS `PARTITION_PARAMS` (
hive-metastore-1     | `PART_ID` bigint(20) NOT NULL,
hive-metastore-1     | `PARAM_KEY` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `PARAM_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`PART_ID`,`PARAM_KEY`),
hive-metastore-1     | KEY `PARTITION_PARAMS_N49` (`PART_ID`),
hive-metastore-1     | CONSTRAINT `PARTITION_PARAMS_FK1` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 154/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 155/506
hive-metastore-1     | 156/506      --
hive-metastore-1     | 157/506      -- Table structure for table `PART_COL_PRIVS`
hive-metastore-1     | 158/506      --
hive-metastore-1     | 159/506
hive-metastore-1     | 160/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 161/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 162/506      CREATE TABLE IF NOT EXISTS `PART_COL_PRIVS` (
hive-metastore-1     | `PART_COLUMN_GRANT_ID` bigint(20) NOT NULL,
hive-metastore-1     | `COLUMN_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `GRANT_OPTION` smallint(6) NOT NULL,
hive-metastore-1     | `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PART_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PART_COL_PRIV` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`PART_COLUMN_GRANT_ID`),
hive-metastore-1     | KEY `PART_COL_PRIVS_N49` (`PART_ID`),
hive-metastore-1     | KEY `PARTITIONCOLUMNPRIVILEGEINDEX` (`PART_ID`,`COLUMN_NAME`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`PART_COL_PRIV`,`GRANTOR`,`GRANTOR_TYPE`),
hive-metastore-1     | CONSTRAINT `PART_COL_PRIVS_FK1` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 163/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 164/506
hive-metastore-1     | 165/506      --
hive-metastore-1     | 166/506      -- Table structure for table `PART_PRIVS`
hive-metastore-1     | 167/506      --
hive-metastore-1     | 168/506
hive-metastore-1     | 169/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 170/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 171/506      CREATE TABLE IF NOT EXISTS `PART_PRIVS` (
hive-metastore-1     | `PART_GRANT_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `GRANT_OPTION` smallint(6) NOT NULL,
hive-metastore-1     | `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PART_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PART_PRIV` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`PART_GRANT_ID`),
hive-metastore-1     | KEY `PARTPRIVILEGEINDEX` (`PART_ID`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`PART_PRIV`,`GRANTOR`,`GRANTOR_TYPE`),
hive-metastore-1     | KEY `PART_PRIVS_N49` (`PART_ID`),
hive-metastore-1     | CONSTRAINT `PART_PRIVS_FK1` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 172/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 173/506
hive-metastore-1     | 174/506      --
hive-metastore-1     | 175/506      -- Table structure for table `ROLES`
hive-metastore-1     | 176/506      --
hive-metastore-1     | 177/506
hive-metastore-1     | 178/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 179/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 180/506      CREATE TABLE IF NOT EXISTS `ROLES` (
hive-metastore-1     | `ROLE_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `OWNER_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `ROLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`ROLE_ID`),
hive-metastore-1     | UNIQUE KEY `ROLEENTITYINDEX` (`ROLE_NAME`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 181/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 182/506
hive-metastore-1     | 183/506      --
hive-metastore-1     | 184/506      -- Table structure for table `ROLE_MAP`
hive-metastore-1     | 185/506      --
hive-metastore-1     | 186/506
hive-metastore-1     | 187/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 188/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 189/506      CREATE TABLE IF NOT EXISTS `ROLE_MAP` (
hive-metastore-1     | `ROLE_GRANT_ID` bigint(20) NOT NULL,
hive-metastore-1     | `ADD_TIME` int(11) NOT NULL,
hive-metastore-1     | `GRANT_OPTION` smallint(6) NOT NULL,
hive-metastore-1     | `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `ROLE_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`ROLE_GRANT_ID`),
hive-metastore-1     | UNIQUE KEY `USERROLEMAPINDEX` (`PRINCIPAL_NAME`,`ROLE_ID`,`GRANTOR`,`GRANTOR_TYPE`),
hive-metastore-1     | KEY `ROLE_MAP_N49` (`ROLE_ID`),
hive-metastore-1     | CONSTRAINT `ROLE_MAP_FK1` FOREIGN KEY (`ROLE_ID`) REFERENCES `ROLES` (`ROLE_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 190/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 191/506
hive-metastore-1     | 192/506      --
hive-metastore-1     | 193/506      -- Table structure for table `SDS`
hive-metastore-1     | 194/506      --
hive-metastore-1     | 195/506
hive-metastore-1     | 196/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 197/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 198/506      CREATE TABLE IF NOT EXISTS `SDS` (
hive-metastore-1     | `SD_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CD_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | `INPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `IS_COMPRESSED` bit(1) NOT NULL,
hive-metastore-1     | `IS_STOREDASSUBDIRECTORIES` bit(1) NOT NULL,
hive-metastore-1     | `LOCATION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `NUM_BUCKETS` int(11) NOT NULL,
hive-metastore-1     | `OUTPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `SERDE_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`SD_ID`),
hive-metastore-1     | KEY `SDS_N49` (`SERDE_ID`),
hive-metastore-1     | KEY `SDS_N50` (`CD_ID`),
hive-metastore-1     | CONSTRAINT `SDS_FK1` FOREIGN KEY (`SERDE_ID`) REFERENCES `SERDES` (`SERDE_ID`),
hive-metastore-1     | CONSTRAINT `SDS_FK2` FOREIGN KEY (`CD_ID`) REFERENCES `CDS` (`CD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 199/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 200/506
hive-metastore-1     | 201/506      --
hive-metastore-1     | 202/506      -- Table structure for table `SD_PARAMS`
hive-metastore-1     | 203/506      --
hive-metastore-1     | 204/506
hive-metastore-1     | 205/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 206/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 207/506      CREATE TABLE IF NOT EXISTS `SD_PARAMS` (
hive-metastore-1     | `SD_ID` bigint(20) NOT NULL,
hive-metastore-1     | `PARAM_KEY` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `PARAM_VALUE` MEDIUMTEXT CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`SD_ID`,`PARAM_KEY`),
hive-metastore-1     | KEY `SD_PARAMS_N49` (`SD_ID`),
hive-metastore-1     | CONSTRAINT `SD_PARAMS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 208/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 209/506
hive-metastore-1     | 210/506      --
hive-metastore-1     | 211/506      -- Table structure for table `SEQUENCE_TABLE`
hive-metastore-1     | 212/506      --
hive-metastore-1     | 213/506
hive-metastore-1     | 214/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 215/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 216/506      CREATE TABLE IF NOT EXISTS `SEQUENCE_TABLE` (
hive-metastore-1     | `SEQUENCE_NAME` varchar(255) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `NEXT_VAL` bigint(20) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`SEQUENCE_NAME`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 217/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 218/506
hive-metastore-1     | 219/506      INSERT INTO `SEQUENCE_TABLE` (`SEQUENCE_NAME`, `NEXT_VAL`) VALUES ('org.apache.hadoop.hive.metastore.model.MNotificationLog', 1);
hive-metastore-1     | 1 row affected (0 seconds)
hive-metastore-1     | 220/506
hive-metastore-1     | 221/506      --
hive-metastore-1     | 222/506      -- Table structure for table `SERDES`
hive-metastore-1     | 223/506      --
hive-metastore-1     | 224/506
hive-metastore-1     | 225/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 226/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 227/506      CREATE TABLE IF NOT EXISTS `SERDES` (
hive-metastore-1     | `SERDE_ID` bigint(20) NOT NULL,
hive-metastore-1     | `NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `SLIB` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `DESCRIPTION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `SERIALIZER_CLASS` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `DESERIALIZER_CLASS` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `SERDE_TYPE` integer,
hive-metastore-1     | PRIMARY KEY (`SERDE_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 228/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 229/506
hive-metastore-1     | 230/506      --
hive-metastore-1     | 231/506      -- Table structure for table `SERDE_PARAMS`
hive-metastore-1     | 232/506      --
hive-metastore-1     | 233/506
hive-metastore-1     | 234/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 235/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 236/506      CREATE TABLE IF NOT EXISTS `SERDE_PARAMS` (
hive-metastore-1     | `SERDE_ID` bigint(20) NOT NULL,
hive-metastore-1     | `PARAM_KEY` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `PARAM_VALUE` MEDIUMTEXT CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`SERDE_ID`,`PARAM_KEY`),
hive-metastore-1     | KEY `SERDE_PARAMS_N49` (`SERDE_ID`),
hive-metastore-1     | CONSTRAINT `SERDE_PARAMS_FK1` FOREIGN KEY (`SERDE_ID`) REFERENCES `SERDES` (`SERDE_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.007 seconds)
hive-metastore-1     | 237/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 238/506
hive-metastore-1     | 239/506      --
hive-metastore-1     | 240/506      -- Table structure for table `SKEWED_COL_NAMES`
hive-metastore-1     | 241/506      --
hive-metastore-1     | 242/506
hive-metastore-1     | 243/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 244/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 245/506      CREATE TABLE IF NOT EXISTS `SKEWED_COL_NAMES` (
hive-metastore-1     | `SD_ID` bigint(20) NOT NULL,
hive-metastore-1     | `SKEWED_COL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `INTEGER_IDX` int(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`SD_ID`,`INTEGER_IDX`),
hive-metastore-1     | KEY `SKEWED_COL_NAMES_N49` (`SD_ID`),
hive-metastore-1     | CONSTRAINT `SKEWED_COL_NAMES_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 246/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 247/506
hive-metastore-1     | 248/506      --
hive-metastore-1     | 249/506      -- Table structure for table `SKEWED_COL_VALUE_LOC_MAP`
hive-metastore-1     | 250/506      --
hive-metastore-1     | 251/506
hive-metastore-1     | 252/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 253/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 254/506      CREATE TABLE IF NOT EXISTS `SKEWED_COL_VALUE_LOC_MAP` (
hive-metastore-1     | `SD_ID` bigint(20) NOT NULL,
hive-metastore-1     | `STRING_LIST_ID_KID` bigint(20) NOT NULL,
hive-metastore-1     | `LOCATION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`SD_ID`,`STRING_LIST_ID_KID`),
hive-metastore-1     | KEY `SKEWED_COL_VALUE_LOC_MAP_N49` (`STRING_LIST_ID_KID`),
hive-metastore-1     | KEY `SKEWED_COL_VALUE_LOC_MAP_N50` (`SD_ID`),
hive-metastore-1     | CONSTRAINT `SKEWED_COL_VALUE_LOC_MAP_FK2` FOREIGN KEY (`STRING_LIST_ID_KID`) REFERENCES `SKEWED_STRING_LIST` (`STRING_LIST_ID`),
hive-metastore-1     | CONSTRAINT `SKEWED_COL_VALUE_LOC_MAP_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 255/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 256/506
hive-metastore-1     | 257/506      --
hive-metastore-1     | 258/506      -- Table structure for table `SKEWED_STRING_LIST`
hive-metastore-1     | 259/506      --
hive-metastore-1     | 260/506
hive-metastore-1     | 261/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 262/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 263/506      CREATE TABLE IF NOT EXISTS `SKEWED_STRING_LIST` (
hive-metastore-1     | `STRING_LIST_ID` bigint(20) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`STRING_LIST_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 264/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 265/506
hive-metastore-1     | 266/506      --
hive-metastore-1     | 267/506      -- Table structure for table `SKEWED_STRING_LIST_VALUES`
hive-metastore-1     | 268/506      --
hive-metastore-1     | 269/506
hive-metastore-1     | 270/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 271/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 272/506      CREATE TABLE IF NOT EXISTS `SKEWED_STRING_LIST_VALUES` (
hive-metastore-1     | `STRING_LIST_ID` bigint(20) NOT NULL,
hive-metastore-1     | `STRING_LIST_VALUE` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `INTEGER_IDX` int(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`STRING_LIST_ID`,`INTEGER_IDX`),
hive-metastore-1     | KEY `SKEWED_STRING_LIST_VALUES_N49` (`STRING_LIST_ID`),
hive-metastore-1     | CONSTRAINT `SKEWED_STRING_LIST_VALUES_FK1` FOREIGN KEY (`STRING_LIST_ID`) REFERENCES `SKEWED_STRING_LIST` (`STRING_LIST_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 273/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 274/506
hive-metastore-1     | 275/506      --
hive-metastore-1     | 276/506      -- Table structure for table `SKEWED_VALUES`
hive-metastore-1     | 277/506      --
hive-metastore-1     | 278/506
hive-metastore-1     | 279/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 280/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 281/506      CREATE TABLE IF NOT EXISTS `SKEWED_VALUES` (
hive-metastore-1     | `SD_ID_OID` bigint(20) NOT NULL,
hive-metastore-1     | `STRING_LIST_ID_EID` bigint(20) NOT NULL,
hive-metastore-1     | `INTEGER_IDX` int(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`SD_ID_OID`,`INTEGER_IDX`),
hive-metastore-1     | KEY `SKEWED_VALUES_N50` (`SD_ID_OID`),
hive-metastore-1     | KEY `SKEWED_VALUES_N49` (`STRING_LIST_ID_EID`),
hive-metastore-1     | CONSTRAINT `SKEWED_VALUES_FK2` FOREIGN KEY (`STRING_LIST_ID_EID`) REFERENCES `SKEWED_STRING_LIST` (`STRING_LIST_ID`),
hive-metastore-1     | CONSTRAINT `SKEWED_VALUES_FK1` FOREIGN KEY (`SD_ID_OID`) REFERENCES `SDS` (`SD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.007 seconds)
hive-metastore-1     | 282/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 283/506
hive-metastore-1     | 284/506      --
hive-metastore-1     | 285/506      -- Table structure for table `SORT_COLS`
hive-metastore-1     | 286/506      --
hive-metastore-1     | 287/506
hive-metastore-1     | 288/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 289/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 290/506      CREATE TABLE IF NOT EXISTS `SORT_COLS` (
hive-metastore-1     | `SD_ID` bigint(20) NOT NULL,
hive-metastore-1     | `COLUMN_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `ORDER` int(11) NOT NULL,
hive-metastore-1     | `INTEGER_IDX` int(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`SD_ID`,`INTEGER_IDX`),
hive-metastore-1     | KEY `SORT_COLS_N49` (`SD_ID`),
hive-metastore-1     | CONSTRAINT `SORT_COLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 291/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 292/506
hive-metastore-1     | 293/506      --
hive-metastore-1     | 294/506      -- Table structure for table `TABLE_PARAMS`
hive-metastore-1     | 295/506      --
hive-metastore-1     | 296/506
hive-metastore-1     | 297/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 298/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 299/506      CREATE TABLE IF NOT EXISTS `TABLE_PARAMS` (
hive-metastore-1     | `TBL_ID` bigint(20) NOT NULL,
hive-metastore-1     | `PARAM_KEY` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `PARAM_VALUE` MEDIUMTEXT CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`TBL_ID`,`PARAM_KEY`),
hive-metastore-1     | KEY `TABLE_PARAMS_N49` (`TBL_ID`),
hive-metastore-1     | CONSTRAINT `TABLE_PARAMS_FK1` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 300/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 301/506
hive-metastore-1     | 302/506      --
hive-metastore-1     | 303/506      -- Table structure for table `MV_CREATION_METADATA`
hive-metastore-1     | 304/506      --
hive-metastore-1     | 305/506
hive-metastore-1     | 306/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 307/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 308/506      CREATE TABLE IF NOT EXISTS `MV_CREATION_METADATA` (
hive-metastore-1     | `MV_CREATION_METADATA_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CAT_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `TBL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `TXN_LIST` TEXT DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`MV_CREATION_METADATA_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 309/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 310/506
hive-metastore-1     | 311/506      CREATE INDEX MV_UNIQUE_TABLE ON MV_CREATION_METADATA (TBL_NAME, DB_NAME) USING BTREE;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 312/506
hive-metastore-1     | 313/506      --
hive-metastore-1     | 314/506      -- Table structure for table `TBLS`
hive-metastore-1     | 315/506      --
hive-metastore-1     | 316/506
hive-metastore-1     | 317/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 318/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 319/506      CREATE TABLE IF NOT EXISTS `TBLS` (
hive-metastore-1     | `TBL_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `DB_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | `LAST_ACCESS_TIME` int(11) NOT NULL,
hive-metastore-1     | `OWNER` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `OWNER_TYPE` varchar(10) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `RETENTION` int(11) NOT NULL,
hive-metastore-1     | `SD_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | `TBL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `TBL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `VIEW_EXPANDED_TEXT` mediumtext,
hive-metastore-1     | `VIEW_ORIGINAL_TEXT` mediumtext,
hive-metastore-1     | `IS_REWRITE_ENABLED` bit(1) NOT NULL DEFAULT 0,
hive-metastore-1     | PRIMARY KEY (`TBL_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUETABLE` (`TBL_NAME`,`DB_ID`),
hive-metastore-1     | KEY `TBLS_N50` (`SD_ID`),
hive-metastore-1     | KEY `TBLS_N49` (`DB_ID`),
hive-metastore-1     | CONSTRAINT `TBLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`),
hive-metastore-1     | CONSTRAINT `TBLS_FK2` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.005 seconds)
hive-metastore-1     | 320/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 321/506
hive-metastore-1     | 322/506      --
hive-metastore-1     | 323/506      -- Table structure for table `MV_TABLES_USED`
hive-metastore-1     | 324/506      --
hive-metastore-1     | 325/506
hive-metastore-1     | 326/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 327/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 328/506      CREATE TABLE IF NOT EXISTS `MV_TABLES_USED` (
hive-metastore-1     | `MV_CREATION_METADATA_ID` bigint(20) NOT NULL,
hive-metastore-1     | `TBL_ID` bigint(20) NOT NULL,
hive-metastore-1     | CONSTRAINT `MV_TABLES_USED_FK1` FOREIGN KEY (`MV_CREATION_METADATA_ID`) REFERENCES `MV_CREATION_METADATA` (`MV_CREATION_METADATA_ID`),
hive-metastore-1     | CONSTRAINT `MV_TABLES_USED_FK2` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 329/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 330/506
hive-metastore-1     | 331/506      --
hive-metastore-1     | 332/506      -- Table structure for table `TBL_COL_PRIVS`
hive-metastore-1     | 333/506      --
hive-metastore-1     | 334/506
hive-metastore-1     | 335/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 336/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 337/506      CREATE TABLE IF NOT EXISTS `TBL_COL_PRIVS` (
hive-metastore-1     | `TBL_COLUMN_GRANT_ID` bigint(20) NOT NULL,
hive-metastore-1     | `COLUMN_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `GRANT_OPTION` smallint(6) NOT NULL,
hive-metastore-1     | `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `TBL_COL_PRIV` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `TBL_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`TBL_COLUMN_GRANT_ID`),
hive-metastore-1     | KEY `TABLECOLUMNPRIVILEGEINDEX` (`TBL_ID`,`COLUMN_NAME`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`TBL_COL_PRIV`,`GRANTOR`,`GRANTOR_TYPE`),
hive-metastore-1     | KEY `TBL_COL_PRIVS_N49` (`TBL_ID`),
hive-metastore-1     | CONSTRAINT `TBL_COL_PRIVS_FK1` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.007 seconds)
hive-metastore-1     | 338/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 339/506
hive-metastore-1     | 340/506      --
hive-metastore-1     | 341/506      -- Table structure for table `TBL_PRIVS`
hive-metastore-1     | 342/506      --
hive-metastore-1     | 343/506
hive-metastore-1     | 344/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 345/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 346/506      CREATE TABLE IF NOT EXISTS `TBL_PRIVS` (
hive-metastore-1     | `TBL_GRANT_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CREATE_TIME` int(11) NOT NULL,
hive-metastore-1     | `GRANT_OPTION` smallint(6) NOT NULL,
hive-metastore-1     | `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `PRINCIPAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `TBL_PRIV` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `TBL_ID` bigint(20) DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`TBL_GRANT_ID`),
hive-metastore-1     | KEY `TBL_PRIVS_N49` (`TBL_ID`),
hive-metastore-1     | KEY `TABLEPRIVILEGEINDEX` (`TBL_ID`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`TBL_PRIV`,`GRANTOR`,`GRANTOR_TYPE`),
hive-metastore-1     | CONSTRAINT `TBL_PRIVS_FK1` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.008 seconds)
hive-metastore-1     | 347/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 348/506
hive-metastore-1     | 349/506      --
hive-metastore-1     | 350/506      -- Table structure for table `TAB_COL_STATS`
hive-metastore-1     | 351/506      --
hive-metastore-1     | 352/506      CREATE TABLE IF NOT EXISTS `TAB_COL_STATS` (
hive-metastore-1     | `CS_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CAT_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `TABLE_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `COLUMN_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `TBL_ID` bigint(20) NOT NULL,
hive-metastore-1     | `LONG_LOW_VALUE` bigint(20),
hive-metastore-1     | `LONG_HIGH_VALUE` bigint(20),
hive-metastore-1     | `DOUBLE_HIGH_VALUE` double(53,4),
hive-metastore-1     | `DOUBLE_LOW_VALUE` double(53,4),
hive-metastore-1     | `BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
hive-metastore-1     | `BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
hive-metastore-1     | `NUM_NULLS` bigint(20) NOT NULL,
hive-metastore-1     | `NUM_DISTINCTS` bigint(20),
hive-metastore-1     | `BIT_VECTOR` blob,
hive-metastore-1     | `AVG_COL_LEN` double(53,4),
hive-metastore-1     | `MAX_COL_LEN` bigint(20),
hive-metastore-1     | `NUM_TRUES` bigint(20),
hive-metastore-1     | `NUM_FALSES` bigint(20),
hive-metastore-1     | `LAST_ANALYZED` bigint(20) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`CS_ID`),
hive-metastore-1     | CONSTRAINT `TAB_COL_STATS_FK` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 353/506
hive-metastore-1     | 354/506      --
hive-metastore-1     | 355/506      -- Table structure for table `PART_COL_STATS`
hive-metastore-1     | 356/506      --
hive-metastore-1     | 357/506      CREATE TABLE IF NOT EXISTS `PART_COL_STATS` (
hive-metastore-1     | `CS_ID` bigint(20) NOT NULL,
hive-metastore-1     | `CAT_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `TABLE_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `COLUMN_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `PART_ID` bigint(20) NOT NULL,
hive-metastore-1     | `LONG_LOW_VALUE` bigint(20),
hive-metastore-1     | `LONG_HIGH_VALUE` bigint(20),
hive-metastore-1     | `DOUBLE_HIGH_VALUE` double(53,4),
hive-metastore-1     | `DOUBLE_LOW_VALUE` double(53,4),
hive-metastore-1     | `BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
hive-metastore-1     | `BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
hive-metastore-1     | `NUM_NULLS` bigint(20) NOT NULL,
hive-metastore-1     | `NUM_DISTINCTS` bigint(20),
hive-metastore-1     | `BIT_VECTOR` blob,
hive-metastore-1     | `AVG_COL_LEN` double(53,4),
hive-metastore-1     | `MAX_COL_LEN` bigint(20),
hive-metastore-1     | `NUM_TRUES` bigint(20),
hive-metastore-1     | `NUM_FALSES` bigint(20),
hive-metastore-1     | `LAST_ANALYZED` bigint(20) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`CS_ID`),
hive-metastore-1     | CONSTRAINT `PART_COL_STATS_FK` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 358/506
hive-metastore-1     | 359/506      CREATE INDEX PCS_STATS_IDX ON PART_COL_STATS (CAT_NAME, DB_NAME,TABLE_NAME,COLUMN_NAME,PARTITION_NAME) USING BTREE;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 360/506
hive-metastore-1     | 361/506      --
hive-metastore-1     | 362/506      -- Table structure for table `TYPES`
hive-metastore-1     | 363/506      --
hive-metastore-1     | 364/506
hive-metastore-1     | 365/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 366/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 367/506      CREATE TABLE IF NOT EXISTS `TYPES` (
hive-metastore-1     | `TYPES_ID` bigint(20) NOT NULL,
hive-metastore-1     | `TYPE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `TYPE1` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `TYPE2` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | PRIMARY KEY (`TYPES_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUE_TYPE` (`TYPE_NAME`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 368/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 369/506
hive-metastore-1     | 370/506      --
hive-metastore-1     | 371/506      -- Table structure for table `TYPE_FIELDS`
hive-metastore-1     | 372/506      --
hive-metastore-1     | 373/506
hive-metastore-1     | 374/506      /*!40101 SET @saved_cs_client     = @@character_set_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 375/506      /*!40101 SET character_set_client = utf8 */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 376/506      CREATE TABLE IF NOT EXISTS `TYPE_FIELDS` (
hive-metastore-1     | `TYPE_NAME` bigint(20) NOT NULL,
hive-metastore-1     | `COMMENT` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
hive-metastore-1     | `FIELD_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `FIELD_TYPE` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
hive-metastore-1     | `INTEGER_IDX` int(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`TYPE_NAME`,`FIELD_NAME`),
hive-metastore-1     | KEY `TYPE_FIELDS_N49` (`TYPE_NAME`),
hive-metastore-1     | CONSTRAINT `TYPE_FIELDS_FK1` FOREIGN KEY (`TYPE_NAME`) REFERENCES `TYPES` (`TYPES_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 377/506
hive-metastore-1     | 378/506      -- Table `MASTER_KEYS` for classes [org.apache.hadoop.hive.metastore.model.MMasterKey]
hive-metastore-1     | 379/506      CREATE TABLE IF NOT EXISTS `MASTER_KEYS`
hive-metastore-1     | (
hive-metastore-1     | `KEY_ID` INTEGER NOT NULL AUTO_INCREMENT,
hive-metastore-1     | `MASTER_KEY` VARCHAR(767) BINARY NULL,
hive-metastore-1     | PRIMARY KEY (`KEY_ID`)
hive-metastore-1     | ) ENGINE=INNODB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 380/506
hive-metastore-1     | 381/506      -- Table `DELEGATION_TOKENS` for classes [org.apache.hadoop.hive.metastore.model.MDelegationToken]
hive-metastore-1     | 382/506      CREATE TABLE IF NOT EXISTS `DELEGATION_TOKENS`
hive-metastore-1     | (
hive-metastore-1     | `TOKEN_IDENT` VARCHAR(767) BINARY NOT NULL,
hive-metastore-1     | `TOKEN` VARCHAR(767) BINARY NULL,
hive-metastore-1     | PRIMARY KEY (`TOKEN_IDENT`)
hive-metastore-1     | ) ENGINE=INNODB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 383/506
hive-metastore-1     | 384/506      --
hive-metastore-1     | 385/506      -- Table structure for VERSION
hive-metastore-1     | 386/506      --
hive-metastore-1     | 387/506      CREATE TABLE IF NOT EXISTS `VERSION` (
hive-metastore-1     | `VER_ID` BIGINT NOT NULL,
hive-metastore-1     | `SCHEMA_VERSION` VARCHAR(127) NOT NULL,
hive-metastore-1     | `VERSION_COMMENT` VARCHAR(255),
hive-metastore-1     | PRIMARY KEY (`VER_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 388/506
hive-metastore-1     | 389/506      --
hive-metastore-1     | 390/506      -- Table structure for table FUNCS
hive-metastore-1     | 391/506      --
hive-metastore-1     | 392/506      CREATE TABLE IF NOT EXISTS `FUNCS` (
hive-metastore-1     | `FUNC_ID` BIGINT(20) NOT NULL,
hive-metastore-1     | `CLASS_NAME` VARCHAR(4000) CHARACTER SET latin1 COLLATE latin1_bin,
hive-metastore-1     | `CREATE_TIME` INT(11) NOT NULL,
hive-metastore-1     | `DB_ID` BIGINT(20),
hive-metastore-1     | `FUNC_NAME` VARCHAR(128) CHARACTER SET latin1 COLLATE latin1_bin,
hive-metastore-1     | `FUNC_TYPE` INT(11) NOT NULL,
hive-metastore-1     | `OWNER_NAME` VARCHAR(128) CHARACTER SET latin1 COLLATE latin1_bin,
hive-metastore-1     | `OWNER_TYPE` VARCHAR(10) CHARACTER SET latin1 COLLATE latin1_bin,
hive-metastore-1     | PRIMARY KEY (`FUNC_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUEFUNCTION` (`FUNC_NAME`, `DB_ID`),
hive-metastore-1     | KEY `FUNCS_N49` (`DB_ID`),
hive-metastore-1     | CONSTRAINT `FUNCS_FK1` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 393/506
hive-metastore-1     | 394/506      --
hive-metastore-1     | 395/506      -- Table structure for table FUNC_RU
hive-metastore-1     | 396/506      --
hive-metastore-1     | 397/506      CREATE TABLE IF NOT EXISTS `FUNC_RU` (
hive-metastore-1     | `FUNC_ID` BIGINT(20) NOT NULL,
hive-metastore-1     | `RESOURCE_TYPE` INT(11) NOT NULL,
hive-metastore-1     | `RESOURCE_URI` VARCHAR(4000) CHARACTER SET latin1 COLLATE latin1_bin,
hive-metastore-1     | `INTEGER_IDX` INT(11) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`FUNC_ID`, `INTEGER_IDX`),
hive-metastore-1     | CONSTRAINT `FUNC_RU_FK1` FOREIGN KEY (`FUNC_ID`) REFERENCES `FUNCS` (`FUNC_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 398/506
hive-metastore-1     | 399/506      CREATE TABLE IF NOT EXISTS `NOTIFICATION_LOG`
hive-metastore-1     | (
hive-metastore-1     | `NL_ID` BIGINT(20) NOT NULL,
hive-metastore-1     | `EVENT_ID` BIGINT(20) NOT NULL,
hive-metastore-1     | `EVENT_TIME` INT(11) NOT NULL,
hive-metastore-1     | `EVENT_TYPE` varchar(32) NOT NULL,
hive-metastore-1     | `CAT_NAME` varchar(256),
hive-metastore-1     | `DB_NAME` varchar(128),
hive-metastore-1     | `TBL_NAME` varchar(256),
hive-metastore-1     | `MESSAGE` longtext,
hive-metastore-1     | `MESSAGE_FORMAT` varchar(16),
hive-metastore-1     | PRIMARY KEY (`NL_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 400/506
hive-metastore-1     | 401/506      CREATE TABLE IF NOT EXISTS `NOTIFICATION_SEQUENCE`
hive-metastore-1     | (
hive-metastore-1     | `NNI_ID` BIGINT(20) NOT NULL,
hive-metastore-1     | `NEXT_EVENT_ID` BIGINT(20) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`NNI_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 402/506
hive-metastore-1     | 403/506      INSERT INTO `NOTIFICATION_SEQUENCE` (`NNI_ID`, `NEXT_EVENT_ID`) SELECT * from (select 1 as `NNI_ID`, 1 as `NOTIFICATION_SEQUENCE`) a WHERE (SELECT COUNT(*) FROM `NOTIFICATION_SEQUENCE`) = 0;
hive-metastore-1     | 1 row affected (0.001 seconds)
hive-metastore-1     | 404/506
hive-metastore-1     | 405/506      CREATE TABLE IF NOT EXISTS `KEY_CONSTRAINTS`
hive-metastore-1     | (
hive-metastore-1     | `CHILD_CD_ID` BIGINT,
hive-metastore-1     | `CHILD_INTEGER_IDX` INT(11),
hive-metastore-1     | `CHILD_TBL_ID` BIGINT,
hive-metastore-1     | `PARENT_CD_ID` BIGINT,
hive-metastore-1     | `PARENT_INTEGER_IDX` INT(11) NOT NULL,
hive-metastore-1     | `PARENT_TBL_ID` BIGINT NOT NULL,
hive-metastore-1     | `POSITION` BIGINT NOT NULL,
hive-metastore-1     | `CONSTRAINT_NAME` VARCHAR(400) NOT NULL,
hive-metastore-1     | `CONSTRAINT_TYPE` SMALLINT(6)  NOT NULL,
hive-metastore-1     | `UPDATE_RULE` SMALLINT(6),
hive-metastore-1     | `DELETE_RULE` SMALLINT(6),
hive-metastore-1     | `ENABLE_VALIDATE_RELY` SMALLINT(6) NOT NULL,
hive-metastore-1     | `DEFAULT_VALUE` VARCHAR(400),
hive-metastore-1     | PRIMARY KEY (`CONSTRAINT_NAME`, `POSITION`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 406/506
hive-metastore-1     | 407/506      CREATE INDEX `CONSTRAINTS_PARENT_TABLE_ID_INDEX` ON KEY_CONSTRAINTS (`PARENT_TBL_ID`) USING BTREE;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 408/506
hive-metastore-1     | 409/506      CREATE INDEX `CONSTRAINTS_CONSTRAINT_TYPE_INDEX` ON KEY_CONSTRAINTS (`CONSTRAINT_TYPE`) USING BTREE;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 410/506
hive-metastore-1     | 411/506      -- -----------------------------
hive-metastore-1     | 412/506      -- Metastore DB Properties table
hive-metastore-1     | 413/506      -- -----------------------------
hive-metastore-1     | 414/506      CREATE TABLE IF NOT EXISTS `METASTORE_DB_PROPERTIES` (
hive-metastore-1     | `PROPERTY_KEY` varchar(255) NOT NULL,
hive-metastore-1     | `PROPERTY_VALUE` varchar(1000) NOT NULL,
hive-metastore-1     | `DESCRIPTION` varchar(1000),
hive-metastore-1     | PRIMARY KEY(`PROPERTY_KEY`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 415/506
hive-metastore-1     | 416/506
hive-metastore-1     | 417/506      -- ---------------------
hive-metastore-1     | 418/506      -- Resource plan tables.
hive-metastore-1     | 419/506      -- ---------------------
hive-metastore-1     | 420/506      CREATE TABLE IF NOT EXISTS WM_RESOURCEPLAN (
hive-metastore-1     | `RP_ID` bigint(20) NOT NULL,
hive-metastore-1     | `NAME` varchar(128) NOT NULL,
hive-metastore-1     | `QUERY_PARALLELISM` int(11),
hive-metastore-1     | `STATUS` varchar(20) NOT NULL,
hive-metastore-1     | `DEFAULT_POOL_ID` bigint(20),
hive-metastore-1     | PRIMARY KEY (`RP_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUE_WM_RESOURCEPLAN` (`NAME`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 421/506
hive-metastore-1     | 422/506      CREATE TABLE IF NOT EXISTS WM_POOL
hive-metastore-1     | (
hive-metastore-1     | `POOL_ID` bigint(20) NOT NULL,
hive-metastore-1     | `RP_ID` bigint(20) NOT NULL,
hive-metastore-1     | `PATH` varchar(767) NOT NULL,
hive-metastore-1     | `ALLOC_FRACTION` DOUBLE,
hive-metastore-1     | `QUERY_PARALLELISM` int(11),
hive-metastore-1     | `SCHEDULING_POLICY` varchar(767),
hive-metastore-1     | PRIMARY KEY (`POOL_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUE_WM_POOL` (`RP_ID`, `PATH`),
hive-metastore-1     | CONSTRAINT `WM_POOL_FK1` FOREIGN KEY (`RP_ID`) REFERENCES `WM_RESOURCEPLAN` (`RP_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 423/506
hive-metastore-1     | 424/506      ALTER TABLE `WM_RESOURCEPLAN` ADD CONSTRAINT `WM_RESOURCEPLAN_FK1` FOREIGN KEY (`DEFAULT_POOL_ID`) REFERENCES `WM_POOL`(`POOL_ID`);
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 425/506
hive-metastore-1     | 426/506      CREATE TABLE IF NOT EXISTS WM_TRIGGER
hive-metastore-1     | (
hive-metastore-1     | `TRIGGER_ID` bigint(20) NOT NULL,
hive-metastore-1     | `RP_ID` bigint(20) NOT NULL,
hive-metastore-1     | `NAME` varchar(128) NOT NULL,
hive-metastore-1     | `TRIGGER_EXPRESSION` varchar(1024),
hive-metastore-1     | `ACTION_EXPRESSION` varchar(1024),
hive-metastore-1     | `IS_IN_UNMANAGED` bit(1) NOT NULL DEFAULT 0,
hive-metastore-1     | PRIMARY KEY (`TRIGGER_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUE_WM_TRIGGER` (`RP_ID`, `NAME`),
hive-metastore-1     | CONSTRAINT `WM_TRIGGER_FK1` FOREIGN KEY (`RP_ID`) REFERENCES `WM_RESOURCEPLAN` (`RP_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.006 seconds)
hive-metastore-1     | 427/506
hive-metastore-1     | 428/506      CREATE TABLE IF NOT EXISTS WM_POOL_TO_TRIGGER
hive-metastore-1     | (
hive-metastore-1     | `POOL_ID` bigint(20) NOT NULL,
hive-metastore-1     | `TRIGGER_ID` bigint(20) NOT NULL,
hive-metastore-1     | PRIMARY KEY (`POOL_ID`, `TRIGGER_ID`),
hive-metastore-1     | CONSTRAINT `WM_POOL_TO_TRIGGER_FK1` FOREIGN KEY (`POOL_ID`) REFERENCES `WM_POOL` (`POOL_ID`),
hive-metastore-1     | CONSTRAINT `WM_POOL_TO_TRIGGER_FK2` FOREIGN KEY (`TRIGGER_ID`) REFERENCES `WM_TRIGGER` (`TRIGGER_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 429/506
hive-metastore-1     | 430/506      CREATE TABLE IF NOT EXISTS WM_MAPPING
hive-metastore-1     | (
hive-metastore-1     | `MAPPING_ID` bigint(20) NOT NULL,
hive-metastore-1     | `RP_ID` bigint(20) NOT NULL,
hive-metastore-1     | `ENTITY_TYPE` varchar(128) NOT NULL,
hive-metastore-1     | `ENTITY_NAME` varchar(128) NOT NULL,
hive-metastore-1     | `POOL_ID` bigint(20),
hive-metastore-1     | `ORDERING` int,
hive-metastore-1     | PRIMARY KEY (`MAPPING_ID`),
hive-metastore-1     | UNIQUE KEY `UNIQUE_WM_MAPPING` (`RP_ID`, `ENTITY_TYPE`, `ENTITY_NAME`),
hive-metastore-1     | CONSTRAINT `WM_MAPPING_FK1` FOREIGN KEY (`RP_ID`) REFERENCES `WM_RESOURCEPLAN` (`RP_ID`),
hive-metastore-1     | CONSTRAINT `WM_MAPPING_FK2` FOREIGN KEY (`POOL_ID`) REFERENCES `WM_POOL` (`POOL_ID`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 431/506
hive-metastore-1     | 432/506      -- ----------------------------
hive-metastore-1     | 433/506      -- Transaction and Lock Tables
hive-metastore-1     | 434/506      -- ----------------------------
hive-metastore-1     | 435/506      CREATE TABLE TXNS (
hive-metastore-1     | TXN_ID bigint PRIMARY KEY,
hive-metastore-1     | TXN_STATE char(1) NOT NULL,
hive-metastore-1     | TXN_STARTED bigint NOT NULL,
hive-metastore-1     | TXN_LAST_HEARTBEAT bigint NOT NULL,
hive-metastore-1     | TXN_USER varchar(128) NOT NULL,
hive-metastore-1     | TXN_HOST varchar(128) NOT NULL,
hive-metastore-1     | TXN_AGENT_INFO varchar(128),
hive-metastore-1     | TXN_META_INFO varchar(128),
hive-metastore-1     | TXN_HEARTBEAT_COUNT int
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 436/506
hive-metastore-1     | 437/506      CREATE TABLE TXN_COMPONENTS (
hive-metastore-1     | TC_TXNID bigint NOT NULL,
hive-metastore-1     | TC_DATABASE varchar(128) NOT NULL,
hive-metastore-1     | TC_TABLE varchar(128),
hive-metastore-1     | TC_PARTITION varchar(767),
hive-metastore-1     | TC_OPERATION_TYPE char(1) NOT NULL,
hive-metastore-1     | TC_WRITEID bigint,
hive-metastore-1     | FOREIGN KEY (TC_TXNID) REFERENCES TXNS (TXN_ID)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 438/506
hive-metastore-1     | 439/506      CREATE INDEX TC_TXNID_INDEX ON TXN_COMPONENTS (TC_TXNID);
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 440/506
hive-metastore-1     | 441/506      CREATE TABLE COMPLETED_TXN_COMPONENTS (
hive-metastore-1     | CTC_TXNID bigint NOT NULL,
hive-metastore-1     | CTC_DATABASE varchar(128) NOT NULL,
hive-metastore-1     | CTC_TABLE varchar(256),
hive-metastore-1     | CTC_PARTITION varchar(767),
hive-metastore-1     | CTC_TIMESTAMP timestamp DEFAULT CURRENT_TIMESTAMP NOT NULL,
hive-metastore-1     | CTC_WRITEID bigint
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 442/506
hive-metastore-1     | 443/506      CREATE INDEX COMPLETED_TXN_COMPONENTS_IDX ON COMPLETED_TXN_COMPONENTS (CTC_DATABASE, CTC_TABLE, CTC_PARTITION) USING BTREE;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 444/506
hive-metastore-1     | 445/506      CREATE TABLE NEXT_TXN_ID (
hive-metastore-1     | NTXN_NEXT bigint NOT NULL
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 446/506      INSERT INTO NEXT_TXN_ID VALUES(1);
hive-metastore-1     | 1 row affected (0.001 seconds)
hive-metastore-1     | 447/506
hive-metastore-1     | 448/506      CREATE TABLE HIVE_LOCKS (
hive-metastore-1     | HL_LOCK_EXT_ID bigint NOT NULL,
hive-metastore-1     | HL_LOCK_INT_ID bigint NOT NULL,
hive-metastore-1     | HL_TXNID bigint NOT NULL,
hive-metastore-1     | HL_DB varchar(128) NOT NULL,
hive-metastore-1     | HL_TABLE varchar(128),
hive-metastore-1     | HL_PARTITION varchar(767),
hive-metastore-1     | HL_LOCK_STATE char(1) not null,
hive-metastore-1     | HL_LOCK_TYPE char(1) not null,
hive-metastore-1     | HL_LAST_HEARTBEAT bigint NOT NULL,
hive-metastore-1     | HL_ACQUIRED_AT bigint,
hive-metastore-1     | HL_USER varchar(128) NOT NULL,
hive-metastore-1     | HL_HOST varchar(128) NOT NULL,
hive-metastore-1     | HL_HEARTBEAT_COUNT int,
hive-metastore-1     | HL_AGENT_INFO varchar(128),
hive-metastore-1     | HL_BLOCKEDBY_EXT_ID bigint,
hive-metastore-1     | HL_BLOCKEDBY_INT_ID bigint,
hive-metastore-1     | PRIMARY KEY(HL_LOCK_EXT_ID, HL_LOCK_INT_ID),
hive-metastore-1     | KEY HIVE_LOCK_TXNID_INDEX (HL_TXNID)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.01 seconds)
hive-metastore-1     | 449/506
hive-metastore-1     | 450/506      CREATE INDEX HL_TXNID_IDX ON HIVE_LOCKS (HL_TXNID);
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 451/506
hive-metastore-1     | 452/506      CREATE TABLE NEXT_LOCK_ID (
hive-metastore-1     | NL_NEXT bigint NOT NULL
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 453/506      INSERT INTO NEXT_LOCK_ID VALUES(1);
hive-metastore-1     | 1 row affected (0 seconds)
hive-metastore-1     | 454/506
hive-metastore-1     | 455/506      CREATE TABLE COMPACTION_QUEUE (
hive-metastore-1     | CQ_ID bigint PRIMARY KEY,
hive-metastore-1     | CQ_DATABASE varchar(128) NOT NULL,
hive-metastore-1     | CQ_TABLE varchar(128) NOT NULL,
hive-metastore-1     | CQ_PARTITION varchar(767),
hive-metastore-1     | CQ_STATE char(1) NOT NULL,
hive-metastore-1     | CQ_TYPE char(1) NOT NULL,
hive-metastore-1     | CQ_TBLPROPERTIES varchar(2048),
hive-metastore-1     | CQ_WORKER_ID varchar(128),
hive-metastore-1     | CQ_START bigint,
hive-metastore-1     | CQ_RUN_AS varchar(128),
hive-metastore-1     | CQ_HIGHEST_WRITE_ID bigint,
hive-metastore-1     | CQ_META_INFO varbinary(2048),
hive-metastore-1     | CQ_HADOOP_JOB_ID varchar(32)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 456/506
hive-metastore-1     | 457/506      CREATE TABLE COMPLETED_COMPACTIONS (
hive-metastore-1     | CC_ID bigint PRIMARY KEY,
hive-metastore-1     | CC_DATABASE varchar(128) NOT NULL,
hive-metastore-1     | CC_TABLE varchar(128) NOT NULL,
hive-metastore-1     | CC_PARTITION varchar(767),
hive-metastore-1     | CC_STATE char(1) NOT NULL,
hive-metastore-1     | CC_TYPE char(1) NOT NULL,
hive-metastore-1     | CC_TBLPROPERTIES varchar(2048),
hive-metastore-1     | CC_WORKER_ID varchar(128),
hive-metastore-1     | CC_START bigint,
hive-metastore-1     | CC_END bigint,
hive-metastore-1     | CC_RUN_AS varchar(128),
hive-metastore-1     | CC_HIGHEST_WRITE_ID bigint,
hive-metastore-1     | CC_META_INFO varbinary(2048),
hive-metastore-1     | CC_HADOOP_JOB_ID varchar(32)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 458/506
hive-metastore-1     | 459/506      CREATE TABLE NEXT_COMPACTION_QUEUE_ID (
hive-metastore-1     | NCQ_NEXT bigint NOT NULL
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 460/506      INSERT INTO NEXT_COMPACTION_QUEUE_ID VALUES(1);
hive-metastore-1     | 1 row affected (0 seconds)
hive-metastore-1     | 461/506
hive-metastore-1     | 462/506      CREATE TABLE AUX_TABLE (
hive-metastore-1     | MT_KEY1 varchar(128) NOT NULL,
hive-metastore-1     | MT_KEY2 bigint NOT NULL,
hive-metastore-1     | MT_COMMENT varchar(255),
hive-metastore-1     | PRIMARY KEY(MT_KEY1, MT_KEY2)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 463/506
hive-metastore-1     | 464/506      CREATE TABLE WRITE_SET (
hive-metastore-1     | WS_DATABASE varchar(128) NOT NULL,
hive-metastore-1     | WS_TABLE varchar(128) NOT NULL,
hive-metastore-1     | WS_PARTITION varchar(767),
hive-metastore-1     | WS_TXNID bigint NOT NULL,
hive-metastore-1     | WS_COMMIT_ID bigint NOT NULL,
hive-metastore-1     | WS_OPERATION_TYPE char(1) NOT NULL
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 465/506
hive-metastore-1     | 466/506      CREATE TABLE TXN_TO_WRITE_ID (
hive-metastore-1     | T2W_TXNID bigint NOT NULL,
hive-metastore-1     | T2W_DATABASE varchar(128) NOT NULL,
hive-metastore-1     | T2W_TABLE varchar(256) NOT NULL,
hive-metastore-1     | T2W_WRITEID bigint NOT NULL
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 467/506
hive-metastore-1     | 468/506      CREATE UNIQUE INDEX TBL_TO_TXN_ID_IDX ON TXN_TO_WRITE_ID (T2W_DATABASE, T2W_TABLE, T2W_TXNID);
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 469/506      CREATE UNIQUE INDEX TBL_TO_WRITE_ID_IDX ON TXN_TO_WRITE_ID (T2W_DATABASE, T2W_TABLE, T2W_WRITEID);
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 470/506
hive-metastore-1     | 471/506      CREATE TABLE NEXT_WRITE_ID (
hive-metastore-1     | NWI_DATABASE varchar(128) NOT NULL,
hive-metastore-1     | NWI_TABLE varchar(256) NOT NULL,
hive-metastore-1     | NWI_NEXT bigint NOT NULL
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.004 seconds)
hive-metastore-1     | 472/506
hive-metastore-1     | 473/506      CREATE UNIQUE INDEX NEXT_WRITE_ID_IDX ON NEXT_WRITE_ID (NWI_DATABASE, NWI_TABLE);
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 474/506
hive-metastore-1     | 475/506      CREATE TABLE MIN_HISTORY_LEVEL (
hive-metastore-1     | MHL_TXNID bigint NOT NULL,
hive-metastore-1     | MHL_MIN_OPEN_TXNID bigint NOT NULL,
hive-metastore-1     | PRIMARY KEY(MHL_TXNID)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 476/506
hive-metastore-1     | 477/506      CREATE INDEX MIN_HISTORY_LEVEL_IDX ON MIN_HISTORY_LEVEL (MHL_MIN_OPEN_TXNID);
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 478/506
hive-metastore-1     | 479/506      CREATE TABLE `I_SCHEMA` (
hive-metastore-1     | `SCHEMA_ID` BIGINT PRIMARY KEY,
hive-metastore-1     | `SCHEMA_TYPE` INTEGER NOT NULL,
hive-metastore-1     | `NAME` VARCHAR(256),
hive-metastore-1     | `DB_ID` BIGINT,
hive-metastore-1     | `COMPATIBILITY` INTEGER NOT NULL,
hive-metastore-1     | `VALIDATION_LEVEL` INTEGER NOT NULL,
hive-metastore-1     | `CAN_EVOLVE` bit(1) NOT NULL,
hive-metastore-1     | `SCHEMA_GROUP` VARCHAR(256),
hive-metastore-1     | `DESCRIPTION` VARCHAR(4000),
hive-metastore-1     | FOREIGN KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`),
hive-metastore-1     | KEY `UNIQUE_NAME` (`NAME`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.005 seconds)
hive-metastore-1     | 480/506
hive-metastore-1     | 481/506      CREATE TABLE `SCHEMA_VERSION` (
hive-metastore-1     | `SCHEMA_VERSION_ID` bigint primary key,
hive-metastore-1     | `SCHEMA_ID` BIGINT,
hive-metastore-1     | `VERSION` INTEGER NOT NULL,
hive-metastore-1     | `CREATED_AT` BIGINT NOT NULL,
hive-metastore-1     | `CD_ID` BIGINT,
hive-metastore-1     | `STATE` INTEGER NOT NULL,
hive-metastore-1     | `DESCRIPTION` VARCHAR(4000),
hive-metastore-1     | `SCHEMA_TEXT` mediumtext,
hive-metastore-1     | `FINGERPRINT` VARCHAR(256),
hive-metastore-1     | `SCHEMA_VERSION_NAME` VARCHAR(256),
hive-metastore-1     | `SERDE_ID` bigint,
hive-metastore-1     | FOREIGN KEY (`SCHEMA_ID`) REFERENCES `I_SCHEMA` (`SCHEMA_ID`),
hive-metastore-1     | FOREIGN KEY (`CD_ID`) REFERENCES `CDS` (`CD_ID`),
hive-metastore-1     | FOREIGN KEY (`SERDE_ID`) REFERENCES `SERDES` (`SERDE_ID`),
hive-metastore-1     | KEY `UNIQUE_VERSION` (`SCHEMA_ID`, `VERSION`)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.007 seconds)
hive-metastore-1     | 482/506
hive-metastore-1     | 483/506      CREATE TABLE REPL_TXN_MAP (
hive-metastore-1     | RTM_REPL_POLICY varchar(256) NOT NULL,
hive-metastore-1     | RTM_SRC_TXN_ID bigint NOT NULL,
hive-metastore-1     | RTM_TARGET_TXN_ID bigint NOT NULL,
hive-metastore-1     | PRIMARY KEY (RTM_REPL_POLICY, RTM_SRC_TXN_ID)
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
hive-metastore-1     | No rows affected (0.006 seconds)
hive-metastore-1     | 484/506
hive-metastore-1     | 485/506
hive-metastore-1     | 486/506      CREATE TABLE RUNTIME_STATS (
hive-metastore-1     | RS_ID bigint primary key,
hive-metastore-1     | CREATE_TIME bigint NOT NULL,
hive-metastore-1     | WEIGHT bigint NOT NULL,
hive-metastore-1     | PAYLOAD blob
hive-metastore-1     | ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
spark-hudi-1         | 6825 [main] WARN  org.apache.spark.sql.hive.client.HiveClientImpl  - HiveClient got thrift exception, destroying client and retrying (0 tries remaining)
spark-hudi-1         | org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1567)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1552)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$databaseExists$1(HiveClientImpl.scala:396)
spark-hudi-1         | 	at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:305)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:236)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:235)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:285)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:396)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:224)
spark-hudi-1         | 	at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:102)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:224)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:150)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:140)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:64)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:96)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
spark-hudi-1         | 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
spark-hudi-1         | 	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
spark-hudi-1         | Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1742)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3607)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3659)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3639)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1563)
spark-hudi-1         | 	... 29 more
spark-hudi-1         | Caused by: java.lang.reflect.InvocationTargetException
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
spark-hudi-1         | 	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
spark-hudi-1         | 	... 36 more
spark-hudi-1         | Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
spark-hudi-1         | 	at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:478)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
spark-hudi-1         | 	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3607)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3659)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3639)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1563)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1552)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$databaseExists$1(HiveClientImpl.scala:396)
spark-hudi-1         | 	at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:305)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:236)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:235)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:285)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:396)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:224)
spark-hudi-1         | 	at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:102)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:224)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:150)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:140)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:64)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:96)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
spark-hudi-1         | 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
spark-hudi-1         | 	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
spark-hudi-1         | Caused by: java.net.ConnectException: Connection refused (Connection refused)
spark-hudi-1         | 	at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
spark-hudi-1         | 	at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
spark-hudi-1         | 	at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
spark-hudi-1         | 	at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
spark-hudi-1         | 	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
spark-hudi-1         | 	at java.base/java.net.Socket.connect(Socket.java:615)
spark-hudi-1         | 	at org.apache.thrift.transport.TSocket.open(TSocket.java:221)
spark-hudi-1         | 	... 44 more
spark-hudi-1         | )
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:527)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
spark-hudi-1         | 	... 41 more
hive-metastore-1     | No rows affected (0.003 seconds)
hive-metastore-1     | 487/506
hive-metastore-1     | 488/506      CREATE INDEX IDX_RUNTIME_STATS_CREATE_TIME ON RUNTIME_STATS(CREATE_TIME);
hive-metastore-1     | No rows affected (0.002 seconds)
hive-metastore-1     | 489/506
hive-metastore-1     | 490/506      -- -----------------------------------------------------------------
hive-metastore-1     | 491/506      -- Record schema version. Should be the last step in the init script
hive-metastore-1     | 492/506      -- -----------------------------------------------------------------
hive-metastore-1     | 493/506      INSERT INTO VERSION (VER_ID, SCHEMA_VERSION, VERSION_COMMENT) VALUES (1, '3.0.0', 'Hive release version 3.0.0');
hive-metastore-1     | 1 row affected (0 seconds)
hive-metastore-1     | 494/506
hive-metastore-1     | 495/506      /*!40101 SET character_set_client = @saved_cs_client */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 496/506      /*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 497/506
hive-metastore-1     | 498/506      /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 499/506      /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 500/506      /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 501/506      /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 502/506      /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
hive-metastore-1     | No rows affected (0.001 seconds)
hive-metastore-1     | 503/506      /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 504/506      /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
hive-metastore-1     | No rows affected (0 seconds)
hive-metastore-1     | 505/506
hive-metastore-1     | 506/506      -- Dump completed on 2012-08-23  0:56:31
hive-metastore-1     | Closing: com.mysql.cj.jdbc.ConnectionImpl
hive-metastore-1     | sqlline version 1.3.0
hive-metastore-1     | Initialization script completed
hive-metastore-1     | schemaTool completed
hive-metastore-1     | 2024-02-02 16:59:25,944 shutdown-hook-0 INFO Log4j appears to be running in a Servlet environment, but there's no log4j-web module available. If you want better web container support, please add the log4j-web JAR to your web archive or server lib directory.
hive-metastore-1     | 2024-02-02 16:59:25,955 shutdown-hook-0 INFO Log4j appears to be running in a Servlet environment, but there's no log4j-web module available. If you want better web container support, please add the log4j-web JAR to your web archive or server lib directory.
hive-metastore-1     | 2024-02-02 16:59:25,981 shutdown-hook-0 WARN Unable to register Log4j shutdown hook because JVM is shutting down. Using SimpleLogger
hive-metastore-1     | 2024-02-02 16:59:26: Starting Metastore Server
spark-hudi-1         | 7829 [main] WARN  org.apache.spark.sql.hive.client.HiveClientImpl  - Deadline exceeded
spark-hudi-1         | Exception in thread "main" org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:111)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:224)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:150)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:140)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:64)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:96)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
spark-hudi-1         | 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
spark-hudi-1         | 	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
spark-hudi-1         | Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1567)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1552)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$databaseExists$1(HiveClientImpl.scala:396)
spark-hudi-1         | 	at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:305)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:236)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:235)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:285)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:396)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:224)
spark-hudi-1         | 	at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:102)
spark-hudi-1         | 	... 18 more
spark-hudi-1         | Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1742)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3607)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3659)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3639)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1563)
spark-hudi-1         | 	... 29 more
spark-hudi-1         | Caused by: java.lang.reflect.InvocationTargetException
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
spark-hudi-1         | 	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
spark-hudi-1         | 	... 36 more
spark-hudi-1         | Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
spark-hudi-1         | 	at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:478)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
spark-hudi-1         | 	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3607)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3659)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3639)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1563)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1552)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$databaseExists$1(HiveClientImpl.scala:396)
spark-hudi-1         | 	at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:305)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:236)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:235)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:285)
spark-hudi-1         | 	at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:396)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:224)
spark-hudi-1         | 	at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:102)
spark-hudi-1         | 	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:224)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:150)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:140)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:64)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:96)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
spark-hudi-1         | 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
spark-hudi-1         | 	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
spark-hudi-1         | Caused by: java.net.ConnectException: Connection refused (Connection refused)
spark-hudi-1         | 	at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
spark-hudi-1         | 	at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
spark-hudi-1         | 	at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
spark-hudi-1         | 	at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
spark-hudi-1         | 	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
spark-hudi-1         | 	at java.base/java.net.Socket.connect(Socket.java:615)
spark-hudi-1         | 	at org.apache.thrift.transport.TSocket.open(TSocket.java:221)
spark-hudi-1         | 	... 44 more
spark-hudi-1         | )
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:527)
spark-hudi-1         | 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
spark-hudi-1         | 	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
spark-hudi-1         | 	... 41 more
spark-hudi-1         | 7847 [shutdown-hook-0] INFO  org.apache.spark.SparkContext  - Invoking stop() from shutdown hook
spark-hudi-1         | 7860 [shutdown-hook-0] INFO  org.sparkproject.jetty.server.AbstractConnector  - Stopped Spark@262816a8{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
spark-hudi-1         | 7862 [shutdown-hook-0] INFO  org.apache.spark.ui.SparkUI  - Stopped Spark web UI at http://spark-hudi:4040
spark-hudi-1         | 7883 [dispatcher-event-loop-2] INFO  org.apache.spark.MapOutputTrackerMasterEndpoint  - MapOutputTrackerMasterEndpoint stopped!
spark-hudi-1         | 7916 [shutdown-hook-0] INFO  org.apache.spark.storage.memory.MemoryStore  - MemoryStore cleared
spark-hudi-1         | 7917 [shutdown-hook-0] INFO  org.apache.spark.storage.BlockManager  - BlockManager stopped
spark-hudi-1         | 7925 [shutdown-hook-0] INFO  org.apache.spark.storage.BlockManagerMaster  - BlockManagerMaster stopped
spark-hudi-1         | 7928 [dispatcher-event-loop-6] INFO  org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint  - OutputCommitCoordinator stopped!
spark-hudi-1         | 7952 [shutdown-hook-0] INFO  org.apache.spark.SparkContext  - Successfully stopped SparkContext
spark-hudi-1         | 7952 [shutdown-hook-0] INFO  org.apache.spark.util.ShutdownHookManager  - Shutdown hook called
spark-hudi-1         | 7953 [shutdown-hook-0] INFO  org.apache.spark.util.ShutdownHookManager  - Deleting directory /tmp/spark-79a8ef88-0f61-447a-b0a4-e5cd1366d17e
spark-hudi-1         | 7970 [shutdown-hook-0] INFO  org.apache.spark.util.ShutdownHookManager  - Deleting directory /tmp/spark-5e544dea-56f1-4b1f-a562-bdd6da6732d5
hive-metastore-1     | SLF4J: Class path contains multiple SLF4J bindings.
hive-metastore-1     | SLF4J: Found binding in [jar:file:/opt/apache-hive-metastore-3.0.0-bin/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hive-metastore-1     | SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hive-metastore-1     | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
hive-metastore-1     | SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
spark-hudi-1 exited with code 1
hive-metastore-1     | 2024-02-02 16:59:29,445 main INFO Log4j appears to be running in a Servlet environment, but there's no log4j-web module available. If you want better web container support, please add the log4j-web JAR to your web archive or server lib directory.
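The `Connection refused` in the spark-hudi trace above happens because Spark's Thrift client tries to reach the metastore while hive-metastore is still running its schema init (`schemaTool` only completes at 16:59:25, and the server starts after that). One common workaround — a minimal sketch, assuming the services are named `spark-hudi` and `hive-metastore` as in these logs, that the metastore listens on the default Thrift port 9083, and that `nc` is available inside the metastore image — is to gate startup with a healthcheck:

```yaml
# Hypothetical docker-compose fragment: make spark-hudi wait until the
# metastore's Thrift port actually accepts connections, instead of
# starting as soon as the container is created.
services:
  hive-metastore:
    # ... image/environment as in the repo's compose file ...
    healthcheck:
      test: ["CMD-SHELL", "nc -z localhost 9083"]
      interval: 5s
      timeout: 3s
      retries: 20

  spark-hudi:
    # ... image/environment as in the repo's compose file ...
    depends_on:
      hive-metastore:
        condition: service_healthy
```

`depends_on` with `condition: service_healthy` needs a Compose implementation that supports it (the `docker compose` plugin does); if `nc` is missing from the image, a retry loop in the spark-hudi entrypoint is an alternative.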

@alberttwong
Author

So let's swap out MariaDB and the HMS for the ones at https://github.com/starburstdata/dbt-trino/blob/master/docker-compose-starburst.yml

atwong@Albert-CelerData trino-hudi-minio % docker-compose up
[+] Running 8/6
 ✔ Network trino-hudi-minio_trino-network                                                                                                                    Created                                                    0.0s
 ✔ Volume "trino-hudi-minio_minio-data"                                                                                                                      Created                                                    0.0s
 ✔ Container trino-hudi-minio-trino-coordinator-1                                                                                                            Create...                                                  0.0s
 ✔ Container trino-hudi-minio-metastore_db-1                                                                                                                 Created                                                    0.0s
 ✔ Container minio                                                                                                                                           Created                                                    0.0s
 ✔ Container trino-hudi-minio-hive-metastore-1                                                                                                               Created                                                    0.0s
 ✔ Container trino-hudi-minio-spark-hudi-1                                                                                                                   Created                                                    0.0s
 ! spark-hudi The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested                                                            0.0s
Attaching to minio, hive-metastore-1, metastore_db-1, spark-hudi-1, trino-coordinator-1
metastore_db-1       | The files belonging to this database system will be owned by user "postgres".
metastore_db-1       | This user must also own the server process.
metastore_db-1       |
metastore_db-1       | The database cluster will be initialized with locale "en_US.utf8".
metastore_db-1       | The default database encoding has accordingly been set to "UTF8".
metastore_db-1       | The default text search configuration will be set to "english".
metastore_db-1       |
metastore_db-1       | Data page checksums are disabled.
metastore_db-1       |
metastore_db-1       | fixing permissions on existing directory /var/lib/postgresql/data ... ok
metastore_db-1       | creating subdirectories ... ok
metastore_db-1       | selecting default max_connections ... 100
metastore_db-1       | selecting default shared_buffers ... 128MB
metastore_db-1       | selecting default timezone ... Etc/UTC
metastore_db-1       | selecting dynamic shared memory implementation ... posix
metastore_db-1       | creating configuration files ... ok
trino-coordinator-1  | + launcher_opts=(--etc-dir /etc/trino)
trino-coordinator-1  | + grep -s -q node.id /etc/trino/node.properties
trino-coordinator-1  | + launcher_opts+=("-Dnode.id=${HOSTNAME}")
trino-coordinator-1  | + exec /usr/lib/trino/bin/launcher run --etc-dir /etc/trino -Dnode.id=trino-coordinator
minio                | WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
minio                |          Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
metastore_db-1       | running bootstrap script ... ok
hive-metastore-1     | ++ dirname /opt/bin/start-hive-metastore.sh
hive-metastore-1     | + cd /opt/bin
hive-metastore-1     | + SERVICE=metastore
hive-metastore-1     | + getopts :hs: opt
hive-metastore-1     | + test -v HIVE_METASTORE_JDBC_URL
hive-metastore-1     | + test -v HIVE_METASTORE_DRIVER
hive-metastore-1     | + test -v HIVE_METASTORE_USER
hive-metastore-1     | + test -v HIVE_METASTORE_PASSWORD
hive-metastore-1     | + test -v S3_ENDPOINT
hive-metastore-1     | + test -v S3_ACCESS_KEY
hive-metastore-1     | + test -v S3_SECRET_KEY
hive-metastore-1     | + test -v S3_PATH_STYLE_ACCESS
hive-metastore-1     | + test -v REGION
hive-metastore-1     | + test -v GOOGLE_CLOUD_KEY_FILE_PATH
hive-metastore-1     | + test -v AZURE_ADL_CLIENT_ID
hive-metastore-1     | + test -v AZURE_ADL_CREDENTIAL
hive-metastore-1     | + test -v AZURE_ADL_REFRESH_URL
hive-metastore-1     | + test -v AZURE_ABFS_STORAGE_ACCOUNT
hive-metastore-1     | + test -v AZURE_ABFS_ACCESS_KEY
hive-metastore-1     | + test -v AZURE_WASB_STORAGE_ACCOUNT
hive-metastore-1     | + test -v AZURE_ABFS_OAUTH
hive-metastore-1     | + test -v AZURE_ABFS_OAUTH_TOKEN_PROVIDER
hive-metastore-1     | + test -v AZURE_ABFS_OAUTH_CLIENT_ID
hive-metastore-1     | + test -v AZURE_ABFS_OAUTH_SECRET
hive-metastore-1     | + test -v AZURE_ABFS_OAUTH_ENDPOINT
hive-metastore-1     | + test -v AZURE_WASB_ACCESS_KEY
hive-metastore-1     | + test -v HIVE_HOME
hive-metastore-1     | + TMP_DIR=/tmp
hive-metastore-1     | + JDBC_DRIVERS_DIR=/opt/apache-hive-3.1.2-bin/lib/
hive-metastore-1     | + MYSQL_DRIVER_DOWNLOAD_URL=https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.20.tar.gz
hive-metastore-1     | + set +x
hive-metastore-1     | + [[ org.postgresql.Driver == \o\r\g\.\m\a\r\i\a\d\b\.\j\d\b\c\.\D\r\i\v\e\r ]]
hive-metastore-1     | ++ echo jdbc:postgresql://metastore_db:5432/metastore_db
minio                | Formatting 1st pool, 1 set(s), 1 drives per set.
minio                | WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
hive-metastore-1     | ++ sed -e 's|&|\\&amp;|g'
hive-metastore-1     | + export HIVE_METASTORE_JDBC_URL=jdbc:postgresql://metastore_db:5432/metastore_db
hive-metastore-1     | + HIVE_METASTORE_JDBC_URL=jdbc:postgresql://metastore_db:5432/metastore_db
hive-metastore-1     | + export HIVE_METASTORE_WAREHOUSE_DIR=s3://datalake/
hive-metastore-1     | + HIVE_METASTORE_WAREHOUSE_DIR=s3://datalake/
hive-metastore-1     | + export HIVE_METASTORE_STORAGE_AUTHORIZATION=true
hive-metastore-1     | + HIVE_METASTORE_STORAGE_AUTHORIZATION=true
hive-metastore-1     | + export HIVE_METASTORE_USERS_IN_ADMIN_ROLE=admin
hive-metastore-1     | + HIVE_METASTORE_USERS_IN_ADMIN_ROLE=admin
hive-metastore-1     | + set +x
hive-metastore-1     | ++ echo jdbc:postgresql://metastore_db:5432/metastore_db
hive-metastore-1     | ++ cut -d / -f 3
hive-metastore-1     | ++ cut -d : -f 1
hive-metastore-1     | + export HIVE_METASTORE_DB_HOST=metastore_db
hive-metastore-1     | + HIVE_METASTORE_DB_HOST=metastore_db
hive-metastore-1     | ++ echo jdbc:postgresql://metastore_db:5432/metastore_db
hive-metastore-1     | ++ cut -d / -f 3
hive-metastore-1     | ++ cut -d : -f 2
hive-metastore-1     | + export HIVE_METASTORE_DB_PORT=5432
hive-metastore-1     | + HIVE_METASTORE_DB_PORT=5432
hive-metastore-1     | ++ echo jdbc:postgresql://metastore_db:5432/metastore_db
hive-metastore-1     | ++ cut -d / -f 4
hive-metastore-1     | ++ cut -d '?' -f 1
hive-metastore-1     | + export HIVE_METASTORE_DB_NAME=metastore_db
hive-metastore-1     | + HIVE_METASTORE_DB_NAME=metastore_db
hive-metastore-1     | Making sure that Postgres is accessible...
hive-metastore-1     | + [[ org.postgresql.Driver == \o\r\g\.\m\a\r\i\a\d\b\.\j\d\b\c\.\D\r\i\v\e\r ]]
hive-metastore-1     | + [[ org.postgresql.Driver == \c\o\m\.\m\y\s\q\l\.\j\d\b\c\.\D\r\i\v\e\r ]]
hive-metastore-1     | + [[ org.postgresql.Driver == org.postgresql.Driver ]]
hive-metastore-1     | + declare -fxr sql
hive-metastore-1     | + set +x
minio                | MinIO Object Storage Server
minio                | Copyright: 2015-2023 MinIO, Inc.
minio                | License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
minio                | Version: RELEASE.2023-11-20T22-40-07Z (go1.21.4 linux/arm64)
minio                |
minio                | Status:         1 Online, 0 Offline.
minio                | S3-API: http://192.168.112.3:9000  http://127.0.0.1:9000
minio                | Console: http://192.168.112.3:9001 http://127.0.0.1:9001
minio                |
minio                | Documentation: https://min.io/docs/minio/linux/index.html
minio                | Warning: The standard parity is set to 0. This can lead to data loss.
hive-metastore-1     | psql: error: could not connect to server: Connection refused
hive-metastore-1     | 	Is the server running on host "metastore_db" (192.168.112.2) and accepting
hive-metastore-1     | 	TCP/IP connections on port 5432?
trino-coordinator-1  | Unrecognized VM option 'UseBiasedLocking'
trino-coordinator-1  | Error: Could not create the Java Virtual Machine.
trino-coordinator-1  | Error: A fatal exception has occurred. Program will exit.
minio                |
minio                |  You are running an older version of MinIO released 2 months before the latest release
minio                |  Update: Run `mc admin update`
minio                |
minio                |
trino-coordinator-1 exited with code 1
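The `Unrecognized VM option 'UseBiasedLocking'` failure above is usually a JDK mismatch rather than a Hudi/HMS problem: biased locking was deprecated in JDK 15 (JEP 374) and the flag was later removed, so a Trino image running a recent JDK refuses to start when its `jvm.config` still passes it. A minimal sketch of the workaround, assuming the coordinator mounts a `jvm.config` from the compose project (the `/tmp` path below is illustrative, standing in for something like `./trino/etc/jvm.config`):

```shell
# Sketch: strip the removed -XX:+UseBiasedLocking flag from a jvm.config.
# /tmp/jvm.config stands in for the file mounted into trino-coordinator;
# adjust the path to match your compose setup.
printf -- '-server\n-Xmx1G\n-XX:+UseBiasedLocking\n' > /tmp/jvm.config
grep -v 'UseBiasedLocking' /tmp/jvm.config > /tmp/jvm.config.fixed
mv /tmp/jvm.config.fixed /tmp/jvm.config
cat /tmp/jvm.config
```

After removing the flag (or switching to a Trino image whose bundled `jvm.config` matches its bundled JDK), `docker-compose up` should get past this error.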
metastore_db-1       | performing post-bootstrap initialization ... ok
metastore_db-1       |
metastore_db-1       | WARNING: enabling "trust" authentication for local connections
metastore_db-1       | You can change this by editing pg_hba.conf or using the option -A, or
metastore_db-1       | --auth-local and --auth-host, the next time you run initdb.
metastore_db-1       | syncing data to disk ... ok
metastore_db-1       |
metastore_db-1       | Success. You can now start the database server using:
metastore_db-1       |
metastore_db-1       |     pg_ctl -D /var/lib/postgresql/data -l logfile start
metastore_db-1       |
metastore_db-1       | waiting for server to start....2024-02-02 21:27:48.694 UTC [48] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
metastore_db-1       | 2024-02-02 21:27:48.700 UTC [49] LOG:  database system was shut down at 2024-02-02 21:27:48 UTC
metastore_db-1       | 2024-02-02 21:27:48.702 UTC [48] LOG:  database system is ready to accept connections
metastore_db-1       |  done
metastore_db-1       | server started
metastore_db-1       | CREATE DATABASE
metastore_db-1       |
metastore_db-1       |
metastore_db-1       | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
metastore_db-1       |
metastore_db-1       | waiting for server to shut down....2024-02-02 21:27:49.073 UTC [48] LOG:  received fast shutdown request
metastore_db-1       | 2024-02-02 21:27:49.074 UTC [48] LOG:  aborting any active transactions
metastore_db-1       | 2024-02-02 21:27:49.075 UTC [48] LOG:  background worker "logical replication launcher" (PID 55) exited with exit code 1
metastore_db-1       | 2024-02-02 21:27:49.075 UTC [50] LOG:  shutting down
metastore_db-1       | 2024-02-02 21:27:49.081 UTC [48] LOG:  database system is shut down
metastore_db-1       |  done
metastore_db-1       | server stopped
metastore_db-1       |
metastore_db-1       | PostgreSQL init process complete; ready for start up.
metastore_db-1       |
metastore_db-1       | 2024-02-02 21:27:49.179 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
metastore_db-1       | 2024-02-02 21:27:49.179 UTC [1] LOG:  listening on IPv6 address "::", port 5432
metastore_db-1       | 2024-02-02 21:27:49.180 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
metastore_db-1       | 2024-02-02 21:27:49.186 UTC [76] LOG:  database system was shut down at 2024-02-02 21:27:49 UTC
metastore_db-1       | 2024-02-02 21:27:49.188 UTC [1] LOG:  database system is ready to accept connections
hive-metastore-1     |  ?column?
hive-metastore-1     | ----------
hive-metastore-1     |         1
hive-metastore-1     | (1 row)
hive-metastore-1     |
metastore_db-1       | 2024-02-02 21:27:49.469 UTC [84] ERROR:  relation "DBS" does not exist at character 15
metastore_db-1       | 2024-02-02 21:27:49.469 UTC [84] STATEMENT:  SELECT 1 FROM "DBS" LIMIT 1
hive-metastore-1     | ERROR:  relation "DBS" does not exist
hive-metastore-1     | LINE 1: SELECT 1 FROM "DBS" LIMIT 1
hive-metastore-1     |                       ^
hive-metastore-1     | INIT_MSG: Setting Environment variables for Hive.
hive-metastore-1     | SLF4J: Class path contains multiple SLF4J bindings.
hive-metastore-1     | SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hive-metastore-1     | SLF4J: Found binding in [jar:file:/opt/hadoop-3.3.1/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hive-metastore-1     | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
hive-metastore-1     | SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
hive-metastore-1     | Metastore connection URL:	 jdbc:postgresql://metastore_db:5432/metastore_db
hive-metastore-1     | Metastore Connection Driver :	 org.postgresql.Driver
hive-metastore-1     | Metastore connection User:	 admin
hive-metastore-1     | Starting metastore schema initialization to 3.1.0
hive-metastore-1     | Initialization script hive-schema-3.1.0.postgres.sql
hive-metastore-1     |
spark-hudi-1         | WARNING: An illegal reflective access operation has occurred
spark-hudi-1         | WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/spark-3.2.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.2.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
spark-hudi-1         | WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
spark-hudi-1         | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
spark-hudi-1         | WARNING: All illegal access operations will be denied in a future release
hive-metastore-1     |
hive-metastore-1     | Initialization script completed
hive-metastore-1     | schemaTool completed
hive-metastore-1     | + [[ -n '' ]]
hive-metastore-1     | + exec hive --service metastore --hiveconf hive.root.logger=INFO,console --hiveconf hive.log.threshold=INFO
hive-metastore-1     | INIT_MSG: Setting Environment variables for Hive.
hive-metastore-1     | INIT_MSG: Setting Environment variables for Hive service: metastore
hive-metastore-1     | INIT_MSG: Setting heap memory size for Hive service: metastore to: 512 MB
hive-metastore-1     | 2024-02-02 21:27:51: Starting Hive Metastore Server
spark-hudi-1         | 0    [main] INFO  org.apache.spark.sql.hive.thriftserver.HiveThriftServer2  - Started daemon with process name: 1@spark-hudi
spark-hudi-1         | 5    [main] INFO  org.apache.spark.util.SignalUtils  - Registering signal handler for TERM
spark-hudi-1         | 6    [main] INFO  org.apache.spark.util.SignalUtils  - Registering signal handler for HUP
spark-hudi-1         | 6    [main] INFO  org.apache.spark.util.SignalUtils  - Registering signal handler for INT
spark-hudi-1         | 10   [main] INFO  org.apache.spark.sql.hive.thriftserver.HiveThriftServer2  - Starting SparkContext
hive-metastore-1     | SLF4J: Class path contains multiple SLF4J bindings.
hive-metastore-1     | SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hive-metastore-1     | SLF4J: Found binding in [jar:file:/opt/hadoop-3.3.1/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hive-metastore-1     | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
spark-hudi-1         | 179  [main] INFO  org.apache.hadoop.hive.conf.HiveConf  - Found configuration file null
hive-metastore-1     | SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
spark-hudi-1         | 304  [main] INFO  org.apache.spark.SparkContext  - Running Spark version 3.2.1
spark-hudi-1         | 389  [main] WARN  org.apache.hadoop.util.NativeCodeLoader  - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hive-metastore-1     | 2024-02-02T21:27:51,841  INFO [main] metastore.HiveMetaStore: STARTUP_MSG:
hive-metastore-1     | /************************************************************
hive-metastore-1     | STARTUP_MSG: Starting HiveMetaStore
hive-metastore-1     | STARTUP_MSG:   host = hive-metastore/192.168.112.5
hive-metastore-1     | STARTUP_MSG:   args = [--hiveconf, hive.root.logger=INFO,console, --hiveconf, hive.log.threshold=INFO]
hive-metastore-1     | STARTUP_MSG:   version = 3.1.2
hive-metastore-1     | STARTUP_MSG:   classpath = /opt/apache-hive-3.1.2-bin/conf:/opt/apache-hive-3.1.2-bin/lib/HikariCP-2.6.1.jar:/opt/apache-hive-3.1.2-bin/lib/ST4-4.0.4.jar:/opt/apache-hive-3.1.2-bin/lib/accumulo-core-1.7.3.jar:/opt/apache-hive-3.1.2-bin/lib/accumulo-fate-1.7.3.jar:/opt/apache-hive-3.1.2-bin/lib/accumulo-start-1.7.3.jar:/opt/apache-hive-3.1.2-bin/lib/accumulo-trace-1.7.3.jar:/opt/apache-hive-3.1.2-bin/lib/aircompressor-0.10.jar:/opt/apache-hive-3.1.2-bin/lib/ant-1.9.1.jar:/opt/apache-hive-3.1.2-bin/lib/ant-launcher-1.9.1.jar:/opt/apache-hive-3.1.2-bin/lib/antlr-runtime-3.5.2.jar:/opt/apache-hive-3.1.2-bin/lib/antlr4-runtime-4.5.jar:/opt/apache-hive-3.1.2-bin/lib/aopalliance-repackaged-2.5.0-b32.jar:/opt/apache-hive-3.1.2-bin/lib/apache-jsp-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/apache-jstl-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/arrow-format-0.8.0.jar:/opt/apache-hive-3.1.2-bin/lib/arrow-memory-0.8.0.jar:/opt/apache-hive-3.1.2-bin/lib/arrow-vector-0.8.0.jar:/opt/apache-hive-3.1.2-bin/lib/asm-5.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/asm-commons-5.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/asm-tree-5.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/audience-annotations-0.5.0.jar:/opt/apache-hive-3.1.2-bin/lib/avatica-1.11.0.jar:/opt/apache-hive-3.1.2-bin/lib/avro-1.7.7.jar:/opt/apache-hive-3.1.2-bin/lib/bonecp-0.8.0.RELEASE.jar:/opt/apache-hive-3.1.2-bin/lib/calcite-core-1.16.0.jar:/opt/apache-hive-3.1.2-bin/lib/calcite-druid-1.16.0.jar:/opt/apache-hive-3.1.2-bin/lib/calcite-linq4j-1.16.0.jar:/opt/apache-hive-3.1.2-bin/lib/commons-cli-1.2.jar:/opt/apache-hive-3.1.2-bin/lib/commons-codec-1.7.jar:/opt/apache-hive-3.1.2-bin/lib/commons-collections4-4.1.jar:/opt/apache-hive-3.1.2-bin/lib/commons-compiler-2.7.6.jar:/opt/apache-hive-3.1.2-bin/lib/commons-compress-1.9.jar:/opt/apache-hive-3.1.2-bin/lib/commons-crypto-1.0.0.jar:/opt/apache-hive-3.1.2-bin/lib/commons-dbcp-1.4.jar:/opt/apache-hive-3.1.2-bin/lib/commons-io-2.4.jar:/opt/apache-hive
-3.1.2-bin/lib/commons-lang-2.6.jar:/opt/apache-hive-3.1.2-bin/lib/commons-lang3-3.2.jar:/opt/apache-hive-3.1.2-bin/lib/commons-logging-1.0.4.jar:/opt/apache-hive-3.1.2-bin/lib/commons-math-2.1.jar:/opt/apache-hive-3.1.2-bin/lib/commons-math3-3.6.1.jar:/opt/apache-hive-3.1.2-bin/lib/commons-pool-1.5.4.jar:/opt/apache-hive-3.1.2-bin/lib/commons-vfs2-2.1.jar:/opt/apache-hive-3.1.2-bin/lib/curator-client-2.12.0.jar:/opt/apache-hive-3.1.2-bin/lib/curator-framework-2.12.0.jar:/opt/apache-hive-3.1.2-bin/lib/curator-recipes-2.12.0.jar:/opt/apache-hive-3.1.2-bin/lib/datanucleus-api-jdo-4.2.4.jar:/opt/apache-hive-3.1.2-bin/lib/datanucleus-core-4.1.17.jar:/opt/apache-hive-3.1.2-bin/lib/datanucleus-rdbms-4.1.19.jar:/opt/apache-hive-3.1.2-bin/lib/derby-10.14.1.0.jar:/opt/apache-hive-3.1.2-bin/lib/disruptor-3.3.6.jar:/opt/apache-hive-3.1.2-bin/lib/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/druid-hdfs-storage-0.12.0.jar:/opt/apache-hive-3.1.2-bin/lib/ecj-4.4.2.jar:/opt/apache-hive-3.1.2-bin/lib/esri-geometry-api-2.0.0.jar:/opt/apache-hive-3.1.2-bin/lib/findbugs-annotations-1.3.9-1.jar:/opt/apache-hive-3.1.2-bin/lib/flatbuffers-1.2.0-3f79e055.jar:/opt/apache-hive-3.1.2-bin/lib/groovy-all-2.4.11.jar:/opt/apache-hive-3.1.2-bin/lib/gson-2.2.4.jar:/opt/apache-hive-3.1.2-bin/lib/guava-27.0-jre.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-client-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-common-2.0.0-alpha4-tests.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-common-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-hadoop-compat-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-hadoop2-compat-2.0.0-alpha4-tests.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-hadoop2-compat-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-http-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-mapreduce-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-metrics-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-metrics-api-2.0.0-alpha4.jar:/opt/ap
ache-hive-3.1.2-bin/lib/hbase-prefix-tree-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-procedure-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-protocol-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-protocol-shaded-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-replication-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-server-2.0.0-alpha4.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-shaded-miscellaneous-1.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-shaded-netty-1.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/hbase-shaded-protobuf-1.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/hive-accumulo-handler-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-beeline-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-classification-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-cli-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-contrib-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-druid-handler-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-exec-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-hbase-handler-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-hcatalog-core-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-hcatalog-server-extensions-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-hplsql-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-jdbc-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-jdbc-handler-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-kryo-registrator-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-llap-client-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-llap-common-3.1.2-tests.jar:/opt/apache-hive-3.1.2-bin/lib/hive-llap-common-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-llap-ext-client-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-llap-server-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-llap-tez-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-metastore-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-serde-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-service-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-service-
rpc-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-shims-0.23-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-shims-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-shims-common-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-shims-scheduler-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-standalone-metastore-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-storage-api-2.7.0.jar:/opt/apache-hive-3.1.2-bin/lib/hive-streaming-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-testutils-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-upgrade-acid-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hive-vector-code-gen-3.1.2.jar:/opt/apache-hive-3.1.2-bin/lib/hk2-api-2.5.0-b32.jar:/opt/apache-hive-3.1.2-bin/lib/hk2-locator-2.5.0-b32.jar:/opt/apache-hive-3.1.2-bin/lib/hk2-utils-2.5.0-b32.jar:/opt/apache-hive-3.1.2-bin/lib/hppc-0.7.2.jar:/opt/apache-hive-3.1.2-bin/lib/htrace-core-3.2.0-incubating.jar:/opt/apache-hive-3.1.2-bin/lib/httpclient-4.5.2.jar:/opt/apache-hive-3.1.2-bin/lib/httpcore-4.4.4.jar:/opt/apache-hive-3.1.2-bin/lib/ivy-2.4.0.jar:/opt/apache-hive-3.1.2-bin/lib/jackson-annotations-2.9.5.jar:/opt/apache-hive-3.1.2-bin/lib/jackson-core-2.9.5.jar:/opt/apache-hive-3.1.2-bin/lib/jackson-core-asl-1.9.13.jar:/opt/apache-hive-3.1.2-bin/lib/jackson-databind-2.9.5.jar:/opt/apache-hive-3.1.2-bin/lib/jackson-dataformat-smile-2.9.5.jar:/opt/apache-hive-3.1.2-bin/lib/jackson-mapper-asl-1.9.13.jar:/opt/apache-hive-3.1.2-bin/lib/jamon-runtime-2.3.1.jar:/opt/apache-hive-3.1.2-bin/lib/janino-2.7.6.jar:/opt/apache-hive-3.1.2-bin/lib/javassist-3.20.0-GA.jar:/opt/apache-hive-3.1.2-bin/lib/javax.annotation-api-1.2.jar:/opt/apache-hive-3.1.2-bin/lib/javax.inject-2.5.0-b32.jar:/opt/apache-hive-3.1.2-bin/lib/javax.jdo-3.2.0-m3.jar:/opt/apache-hive-3.1.2-bin/lib/javax.servlet-api-3.1.0.jar:/opt/apache-hive-3.1.2-bin/lib/javax.servlet.jsp-2.3.2.jar:/opt/apache-hive-3.1.2-bin/lib/javax.servlet.jsp-api-2.3.1.jar:/opt/apache-hive-3.1.2-bin/lib/javax.ws.rs-api-2.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/javolution-5.5.1.j
ar:/opt/apache-hive-3.1.2-bin/lib/jaxb-api-2.2.11.jar:/opt/apache-hive-3.1.2-bin/lib/jcodings-1.0.18.jar:/opt/apache-hive-3.1.2-bin/lib/jcommander-1.32.jar:/opt/apache-hive-3.1.2-bin/lib/jdo-api-3.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/jersey-client-2.25.1.jar:/opt/apache-hive-3.1.2-bin/lib/jersey-common-2.25.1.jar:/opt/apache-hive-3.1.2-bin/lib/jersey-container-servlet-core-2.25.1.jar:/opt/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar:/opt/apache-hive-3.1.2-bin/lib/jersey-media-jaxb-2.25.1.jar:/opt/apache-hive-3.1.2-bin/lib/jersey-server-2.25.1.jar:/opt/apache-hive-3.1.2-bin/lib/jettison-1.1.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-annotations-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-client-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-http-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-io-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-jaas-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-jndi-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-plus-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-rewrite-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-runner-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-schemas-3.1.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-security-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-server-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-servlet-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-util-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-webapp-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jetty-xml-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/jline-2.12.jar:/opt/apache-hive-3.1.2-bin/lib/joda-time-2.9.9.jar:/opt/apache-hive-3.1.2-bin/lib/joni-2.1.11.jar:/opt/apache-hive-3.1.2-bin/lib/jpam-1.1.jar:/opt/apache-hive-3.1.2-bin/lib/json-1.8.jar:/opt/apache-hive-3.1.2-bin/lib/jsr305-3.0.0.jar:/opt/apache-hive-3.1.2-bin/lib/jta-1.1.jar:/opt/apache-hive-3.1.2-bin/lib/libfb303-0.9.3.jar
:/opt/apache-hive-3.1.2-bin/lib/libthrift-0.9.3.jar:/opt/apache-hive-3.1.2-bin/lib/mariadb-java-client-2.6.0.jar:/opt/apache-hive-3.1.2-bin/lib/memory-0.9.0.jar:/opt/apache-hive-3.1.2-bin/lib/metrics-core-3.1.0.jar:/opt/apache-hive-3.1.2-bin/lib/metrics-json-3.1.0.jar:/opt/apache-hive-3.1.2-bin/lib/metrics-jvm-3.1.0.jar:/opt/apache-hive-3.1.2-bin/lib/mysql-metadata-storage-0.12.0.jar:/opt/apache-hive-3.1.2-bin/lib/netty-3.10.5.Final.jar:/opt/apache-hive-3.1.2-bin/lib/netty-all-4.1.17.Final.jar:/opt/apache-hive-3.1.2-bin/lib/netty-buffer-4.1.17.Final.jar:/opt/apache-hive-3.1.2-bin/lib/netty-common-4.1.17.Final.jar:/opt/apache-hive-3.1.2-bin/lib/opencsv-2.3.jar:/opt/apache-hive-3.1.2-bin/lib/orc-core-1.5.6.jar:/opt/apache-hive-3.1.2-bin/lib/orc-shims-1.5.6.jar:/opt/apache-hive-3.1.2-bin/lib/org.abego.treelayout.core-1.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/osgi-resource-locator-1.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/paranamer-2.3.jar:/opt/apache-hive-3.1.2-bin/lib/parquet-hadoop-bundle-1.10.0.jar:/opt/apache-hive-3.1.2-bin/lib/postgresql-jdbc.jar:/opt/apache-hive-3.1.2-bin/lib/postgresql-metadata-storage-0.12.0.jar:/opt/apache-hive-3.1.2-bin/lib/protobuf-java-2.5.0.jar:/opt/apache-hive-3.1.2-bin/lib/sketches-core-0.9.0.jar:/opt/apache-hive-3.1.2-bin/lib/snappy-java-1.1.4.jar:/opt/apache-hive-3.1.2-bin/lib/sqlline-1.3.0.jar:/opt/apache-hive-3.1.2-bin/lib/stax-api-1.0.1.jar:/opt/apache-hive-3.1.2-bin/lib/super-csv-2.2.0.jar:/opt/apache-hive-3.1.2-bin/lib/taglibs-standard-impl-1.2.5.jar:/opt/apache-hive-3.1.2-bin/lib/taglibs-standard-spec-1.2.5.jar:/opt/apache-hive-3.1.2-bin/lib/tempus-fugit-1.1.jar:/opt/apache-hive-3.1.2-bin/lib/transaction-api-1.1.jar:/opt/apache-hive-3.1.2-bin/lib/validation-api-1.1.0.Final.jar:/opt/apache-hive-3.1.2-bin/lib/velocity-1.5.jar:/opt/apache-hive-3.1.2-bin/lib/websocket-api-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/websocket-client-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/websocket-common-9.3.20.v20170531.jar:/opt/ap
ache-hive-3.1.2-bin/lib/websocket-server-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/websocket-servlet-9.3.20.v20170531.jar:/opt/apache-hive-3.1.2-bin/lib/zookeeper-3.4.6.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/aliyun-java-sdk-core-3.4.0.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/aliyun-java-sdk-ecs-4.2.0.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/aliyun-java-sdk-ram-3.0.0.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/aliyun-java-sdk-sts-3.0.0.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/aliyun-sdk-oss-3.4.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.343.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/azure-keyvault-core-1.0.0.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/azure-storage-7.0.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/gcs-connector-hadoop2-latest.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-aliyun-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-archive-logs-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-archives-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-aws-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-azure-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-client-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-datajoin-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-dynamometer-blockgen-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-dynamometer-infra-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-dynamometer-workload-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-extras-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-fs2img-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-gridmix-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-kafka-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-minicluster-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-openstack-3.3.1.jar:/opt/hadoop-3.3.1/share/hadoop/tools/lib/hadoop-resourceestimator-3.3.1.jar:/opt/hadoop-3
[... remainder of Hadoop 3.3.1 / Hive 3.1.2 STARTUP_MSG classpath truncated ...]
hive-metastore-1     | STARTUP_MSG:   build = git://HW13934/Users/gates/tmp/hive-branch-3.1/hive -r 8190d2be7b7165effa62bd21b7d60ef81fb0e4af; compiled by 'gates' on Thu Aug 22 15:01:39 PDT 2019
hive-metastore-1     | ************************************************************/
hive-metastore-1     | 2024-02-02T21:27:51,864  INFO [main] metastore.HiveMetaStore: Starting hive metastore on port 9083
spark-hudi-1         | 578  [main] INFO  org.apache.spark.resource.ResourceUtils  - ==============================================================
spark-hudi-1         | 579  [main] INFO  org.apache.spark.resource.ResourceUtils  - No custom resources configured for spark.driver.
spark-hudi-1         | 579  [main] INFO  org.apache.spark.resource.ResourceUtils  - ==============================================================
spark-hudi-1         | 580  [main] INFO  org.apache.spark.SparkContext  - Submitted application: Thrift JDBC/ODBC Server
spark-hudi-1         | 606  [main] INFO  org.apache.spark.resource.ResourceProfile  - Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
spark-hudi-1         | 617  [main] INFO  org.apache.spark.resource.ResourceProfile  - Limiting resource is cpu
hive-metastore-1     | 2024-02-02T21:27:51,940  INFO [main] metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
spark-hudi-1         | 617  [main] INFO  org.apache.spark.resource.ResourceProfileManager  - Added ResourceProfile id: 0
hive-metastore-1     | 2024-02-02T21:27:51,959  WARN [main] metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
hive-metastore-1     | 2024-02-02T21:27:51,962  INFO [main] metastore.ObjectStore: ObjectStore, initialize called
hive-metastore-1     | 2024-02-02T21:27:51,963  INFO [main] conf.MetastoreConf: Unable to find config file hivemetastore-site.xml
hive-metastore-1     | 2024-02-02T21:27:51,963  INFO [main] conf.MetastoreConf: Found configuration file null
hive-metastore-1     | 2024-02-02T21:27:51,963  INFO [main] conf.MetastoreConf: Unable to find config file metastore-site.xml
hive-metastore-1     | 2024-02-02T21:27:51,963  INFO [main] conf.MetastoreConf: Found configuration file null
spark-hudi-1         | 670  [main] INFO  org.apache.spark.SecurityManager  - Changing view acls to: root
spark-hudi-1         | 670  [main] INFO  org.apache.spark.SecurityManager  - Changing modify acls to: root
spark-hudi-1         | 670  [main] INFO  org.apache.spark.SecurityManager  - Changing view acls groups to:
spark-hudi-1         | 671  [main] INFO  org.apache.spark.SecurityManager  - Changing modify acls groups to:
spark-hudi-1         | 671  [main] INFO  org.apache.spark.SecurityManager  - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
hive-metastore-1     | 2024-02-02T21:27:52,149  INFO [main] hikari.HikariDataSource: HikariPool-1 - Starting...
hive-metastore-1     | 2024-02-02T21:27:52,190  INFO [main] hikari.HikariDataSource: HikariPool-1 - Start completed.
hive-metastore-1     | 2024-02-02T21:27:52,203  INFO [main] hikari.HikariDataSource: HikariPool-2 - Starting...
hive-metastore-1     | 2024-02-02T21:27:52,206  INFO [main] hikari.HikariDataSource: HikariPool-2 - Start completed.
hive-metastore-1     | 2024-02-02T21:27:52,236  INFO [main] metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
hive-metastore-1     | 2024-02-02T21:27:52,269  INFO [main] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is POSTGRES
hive-metastore-1     | 2024-02-02T21:27:52,270  INFO [main] metastore.ObjectStore: Initialized ObjectStore
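One detail worth noting in the lines above: even after switching the compose file to `mariadb:10.6.9`, the metastore reports `Using direct SQL, underlying DB is POSTGRES`, which suggests the HMS container's JDBC settings were never repointed at MariaDB. A minimal sketch of what a `metastore-site.xml` for MariaDB might look like — the property keys are standard Hive metastore settings, but the host name, database name, and credentials below are placeholders that would need to match this compose stack:

```xml
<configuration>
  <!-- Placeholder JDBC URL: host "mariadb" and db "metastore_db" are assumptions -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://mariadb:3306/metastore_db</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.mariadb.jdbc.Driver</value>
  </property>
  <!-- Placeholder credentials -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
</configuration>
```

If HMS is genuinely meant to run against Postgres in this setup, the MariaDB switch may be a red herring and the log line is benign.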
spark-hudi-1         | 1057 [main] INFO  org.apache.spark.util.Utils  - Successfully started service 'sparkDriver' on port 36607.
spark-hudi-1         | 1096 [main] INFO  org.apache.spark.SparkEnv  - Registering MapOutputTracker
spark-hudi-1         | 1133 [main] INFO  org.apache.spark.SparkEnv  - Registering BlockManagerMaster
spark-hudi-1         | 1148 [main] INFO  org.apache.spark.storage.BlockManagerMasterEndpoint  - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
spark-hudi-1         | 1149 [main] INFO  org.apache.spark.storage.BlockManagerMasterEndpoint  - BlockManagerMasterEndpoint up
spark-hudi-1         | 1154 [main] INFO  org.apache.spark.SparkEnv  - Registering BlockManagerMasterHeartbeat
spark-hudi-1         | 1180 [main] INFO  org.apache.spark.storage.DiskBlockManager  - Created local directory at /tmp/blockmgr-11eab2a3-356f-402e-91af-88939583dd52
spark-hudi-1         | 1220 [main] INFO  org.apache.spark.storage.memory.MemoryStore  - MemoryStore started with capacity 434.4 MiB
spark-hudi-1         | 1247 [main] INFO  org.apache.spark.SparkEnv  - Registering OutputCommitCoordinator
spark-hudi-1         | 1353 [main] INFO  org.sparkproject.jetty.util.log  - Logging initialized @3521ms to org.sparkproject.jetty.util.log.Slf4jLog
spark-hudi-1         | 1423 [main] INFO  org.sparkproject.jetty.server.Server  - jetty-9.4.43.v20210629; built: 2021-06-30T11:07:22.254Z; git: 526006ecfa3af7f1a27ef3a288e2bef7ea9dd7e8; jvm 11.0.16.1+1-LTS
spark-hudi-1         | 1449 [main] INFO  org.sparkproject.jetty.server.Server  - Started @3619ms
spark-hudi-1         | 1488 [main] INFO  org.sparkproject.jetty.server.AbstractConnector  - Started ServerConnector@107909fa{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
spark-hudi-1         | 1488 [main] INFO  org.apache.spark.util.Utils  - Successfully started service 'SparkUI' on port 4040.
spark-hudi-1         | 1539 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@a9f023e{/jobs,null,AVAILABLE,@Spark}
spark-hudi-1         | 1542 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@36681447{/jobs/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1543 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@726aa968{/jobs/job,null,AVAILABLE,@Spark}
spark-hudi-1         | 1548 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@328d044f{/jobs/job/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1549 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@4745e9c{/stages,null,AVAILABLE,@Spark}
spark-hudi-1         | 1550 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@75de29c0{/stages/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1551 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@296e281a{/stages/stage,null,AVAILABLE,@Spark}
spark-hudi-1         | 1553 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6b350309{/stages/stage/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1554 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@588f63c{/stages/pool,null,AVAILABLE,@Spark}
spark-hudi-1         | 1557 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@1981d861{/stages/pool/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1558 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@53f4c1e6{/storage,null,AVAILABLE,@Spark}
spark-hudi-1         | 1561 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6342d610{/storage/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1562 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@784abd3e{/storage/rdd,null,AVAILABLE,@Spark}
spark-hudi-1         | 1563 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@434514d8{/storage/rdd/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1563 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@4613311f{/environment,null,AVAILABLE,@Spark}
spark-hudi-1         | 1564 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@ec8f4b9{/environment/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1565 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@5484117b{/executors,null,AVAILABLE,@Spark}
spark-hudi-1         | 1565 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@7efb53af{/executors/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1566 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@3dfa819{/executors/threadDump,null,AVAILABLE,@Spark}
spark-hudi-1         | 1567 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@68ab0936{/executors/threadDump/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 1581 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@42b84286{/static,null,AVAILABLE,@Spark}
spark-hudi-1         | 1583 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@350d3f4d{/,null,AVAILABLE,@Spark}
spark-hudi-1         | 1584 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@73844119{/api,null,AVAILABLE,@Spark}
spark-hudi-1         | 1586 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@411c6d44{/jobs/job/kill,null,AVAILABLE,@Spark}
spark-hudi-1         | 1587 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@748d2277{/stages/stage/kill,null,AVAILABLE,@Spark}
spark-hudi-1         | 1592 [main] INFO  org.apache.spark.ui.SparkUI  - Bound SparkUI to 0.0.0.0, and started at http://spark-hudi:4040
spark-hudi-1         | 1802 [main] INFO  org.apache.spark.executor.Executor  - Starting executor ID driver on host spark-hudi
spark-hudi-1         | 1839 [main] INFO  org.apache.spark.util.Utils  - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39405.
spark-hudi-1         | 1839 [main] INFO  org.apache.spark.network.netty.NettyBlockTransferService  - Server created on spark-hudi:39405
spark-hudi-1         | 1841 [main] INFO  org.apache.spark.storage.BlockManager  - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
spark-hudi-1         | 1846 [main] INFO  org.apache.spark.storage.BlockManagerMaster  - Registering BlockManager BlockManagerId(driver, spark-hudi, 39405, None)
spark-hudi-1         | 1851 [dispatcher-BlockManagerMaster] INFO  org.apache.spark.storage.BlockManagerMasterEndpoint  - Registering block manager spark-hudi:39405 with 434.4 MiB RAM, BlockManagerId(driver, spark-hudi, 39405, None)
spark-hudi-1         | 1861 [main] INFO  org.apache.spark.storage.BlockManagerMaster  - Registered BlockManager BlockManagerId(driver, spark-hudi, 39405, None)
spark-hudi-1         | 1862 [main] INFO  org.apache.spark.storage.BlockManager  - Initialized BlockManager: BlockManagerId(driver, spark-hudi, 39405, None)
hive-metastore-1     | 2024-02-02T21:27:53,305  INFO [main] metastore.HiveMetaStore: Setting location of default catalog, as it hasn't been done after upgrade
spark-hudi-1         | 2046 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6870c3c2{/metrics/json,null,AVAILABLE,@Spark}
hive-metastore-1     | 2024-02-02T21:27:53,542  INFO [main] impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
hive-metastore-1     | 2024-02-02T21:27:53,552  INFO [main] impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
hive-metastore-1     | 2024-02-02T21:27:53,552  INFO [main] impl.MetricsSystemImpl: s3a-file-system metrics system started
spark-hudi-1         | 2304 [main] INFO  org.apache.spark.sql.internal.SharedState  - Setting hive.metastore.warehouse.dir ('hdfs://hadoop-master:9000/user/hive/warehouse') to the value of spark.sql.warehouse.dir.
spark-hudi-1         | 2372 [main] WARN  org.apache.hadoop.fs.FileSystem  - Failed to initialize fileystem hdfs://hadoop-master:9000/user/hive/warehouse: java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop-master
spark-hudi-1         | 2374 [main] WARN  org.apache.spark.sql.internal.SharedState  - Cannot qualify the warehouse path, leaving it unqualified.
spark-hudi-1         | java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop-master
spark-hudi-1         | 	at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:466)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:134)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:201)
spark-hudi-1         | 	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:186)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
spark-hudi-1         | 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
spark-hudi-1         | 	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState$.qualifyWarehousePath(SharedState.scala:282)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.liftedTree1$1(SharedState.scala:80)
spark-hudi-1         | 	at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:79)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.$anonfun$sharedState$1(SparkSession.scala:139)
spark-hudi-1         | 	at scala.Option.getOrElse(Option.scala:189)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:139)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:138)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.$anonfun$sessionState$2(SparkSession.scala:158)
spark-hudi-1         | 	at scala.Option.getOrElse(Option.scala:189)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:156)
spark-hudi-1         | 	at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:153)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:62)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:96)
spark-hudi-1         | 	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
spark-hudi-1         | 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
spark-hudi-1         | 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
spark-hudi-1         | 	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
spark-hudi-1         | 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
spark-hudi-1         | Caused by: java.net.UnknownHostException: hadoop-master
spark-hudi-1         | 	... 38 more
spark-hudi-1         | 2391 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@33bb3f86{/SQL,null,AVAILABLE,@Spark}
spark-hudi-1         | 2393 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6ac4c3f7{/SQL/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 2395 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@1150d471{/SQL/execution,null,AVAILABLE,@Spark}
spark-hudi-1         | 2396 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@654e6a90{/SQL/execution/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 2407 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@36ef1d65{/static/sql,null,AVAILABLE,@Spark}
hive-metastore-1     | 2024-02-02T21:27:53,945  WARN [main] metastore.ObjectStore: Failed to get database hive.default, returning NoSuchObjectException
hive-metastore-1     | 2024-02-02T21:27:53,970  INFO [main] metastore.HiveMetaStore: Added admin role in metastore
hive-metastore-1     | 2024-02-02T21:27:53,971  INFO [main] metastore.HiveMetaStore: Added public role in metastore
hive-metastore-1     | 2024-02-02T21:27:53,991  INFO [main] metastore.HiveMetaStore: Added admin to admin role
hive-metastore-1     | 2024-02-02T21:27:54,024  INFO [main] conf.HiveConf: Found configuration file file:/opt/apache-hive-3.1.2-bin/conf/hive-site.xml
hive-metastore-1     | 2024-02-02T21:27:54,103  INFO [main] metastore.HiveMetaStore: Starting DB backed MetaStore Server with SetUGI enabled
hive-metastore-1     | 2024-02-02T21:27:54,105  INFO [main] metastore.HiveMetaStore: Started the new metaserver on port [9083]...
hive-metastore-1     | 2024-02-02T21:27:54,105  INFO [main] metastore.HiveMetaStore: Options.minWorkerThreads = 200
hive-metastore-1     | 2024-02-02T21:27:54,105  INFO [main] metastore.HiveMetaStore: Options.maxWorkerThreads = 1000
hive-metastore-1     | 2024-02-02T21:27:54,105  INFO [main] metastore.HiveMetaStore: TCP keepalive = true
hive-metastore-1     | 2024-02-02T21:27:54,105  INFO [main] metastore.HiveMetaStore: Enable SSL = false
spark-hudi-1         | 3170 [main] INFO  org.apache.spark.sql.hive.HiveUtils  - Initializing HiveMetastoreConnection version 2.3.9 using Spark classes.
spark-hudi-1         | 3482 [main] INFO  org.apache.spark.sql.hive.client.HiveClientImpl  - Warehouse location for Hive client (version 2.3.9) is hdfs://hadoop-master:9000/user/hive/warehouse
spark-hudi-1         | 3522 [main] INFO  hive.metastore  - Trying to connect to metastore with URI thrift://hive-metastore:9083
spark-hudi-1         | 3546 [main] INFO  hive.metastore  - Opened a connection to metastore, current connections: 1
spark-hudi-1         | 3623 [main] INFO  hive.metastore  - Connected to metastore.
hive-metastore-1     | 2024-02-02T21:27:55,013  INFO [pool-8-thread-1] metastore.HiveMetaStore: 1: source:192.168.112.6 get_database: default
hive-metastore-1     | 2024-02-02T21:27:55,014  INFO [pool-8-thread-1] HiveMetaStore.audit: ugi=root	ip=192.168.112.6	cmd=source:192.168.112.6 get_database: default
hive-metastore-1     | 2024-02-02T21:27:55,014  INFO [pool-8-thread-1] metastore.HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
hive-metastore-1     | 2024-02-02T21:27:55,014  WARN [pool-8-thread-1] metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
hive-metastore-1     | 2024-02-02T21:27:55,014  INFO [pool-8-thread-1] metastore.ObjectStore: ObjectStore, initialize called
hive-metastore-1     | 2024-02-02T21:27:55,016  INFO [pool-8-thread-1] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is POSTGRES
hive-metastore-1     | 2024-02-02T21:27:55,017  INFO [pool-8-thread-1] metastore.ObjectStore: Initialized ObjectStore
spark-hudi-1         | 3710 [main] INFO  org.apache.spark.sql.hive.HiveUtils  - Initializing execution hive, version 2.3.9
spark-hudi-1         | 3720 [main] INFO  org.apache.spark.sql.hive.client.HiveClientImpl  - Warehouse location for Hive client (version 2.3.9) is hdfs://hadoop-master:9000/user/hive/warehouse
spark-hudi-1         | 3746 [main] INFO  org.apache.hive.service.cli.session.SessionManager  - Operation log root directory is created: /tmp/root/operation_logs
spark-hudi-1         | 3750 [main] INFO  org.apache.hive.service.cli.session.SessionManager  - HiveServer2: Background operation thread pool size: 100
spark-hudi-1         | 3750 [main] INFO  org.apache.hive.service.cli.session.SessionManager  - HiveServer2: Background operation thread wait queue size: 100
spark-hudi-1         | 3751 [main] INFO  org.apache.hive.service.cli.session.SessionManager  - HiveServer2: Background operation thread keepalive time: 10 seconds
spark-hudi-1         | 3755 [main] INFO  org.apache.hive.service.AbstractService  - Service:OperationManager is inited.
spark-hudi-1         | 3755 [main] INFO  org.apache.hive.service.AbstractService  - Service:SessionManager is inited.
spark-hudi-1         | 3756 [main] INFO  org.apache.hive.service.AbstractService  - Service: CLIService is inited.
spark-hudi-1         | 3756 [main] INFO  org.apache.hive.service.AbstractService  - Service:ThriftBinaryCLIService is inited.
spark-hudi-1         | 3756 [main] INFO  org.apache.hive.service.AbstractService  - Service: HiveServer2 is inited.
spark-hudi-1         | 3757 [main] INFO  org.apache.hive.service.AbstractService  - Service:OperationManager is started.
spark-hudi-1         | 3757 [main] INFO  org.apache.hive.service.AbstractService  - Service:SessionManager is started.
spark-hudi-1         | 3758 [main] INFO  org.apache.hive.service.AbstractService  - Service: CLIService is started.
spark-hudi-1         | 3758 [main] INFO  org.apache.hive.service.AbstractService  - Service:ThriftBinaryCLIService is started.
spark-hudi-1         | 3793 [main] INFO  org.apache.hive.service.cli.thrift.ThriftCLIService  - Starting ThriftBinaryCLIService on port 10213 with 5...500 worker threads
spark-hudi-1         | 3793 [main] INFO  org.apache.hive.service.AbstractService  - Service:HiveServer2 is started.
spark-hudi-1         | 3794 [main] INFO  org.apache.spark.sql.hive.thriftserver.HiveThriftServer2  - HiveThriftServer2 started
spark-hudi-1         | 3809 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@79f1e22e{/sqlserver,null,AVAILABLE,@Spark}
spark-hudi-1         | 3810 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@e154848{/sqlserver/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 3810 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@74c7522c{/sqlserver/session,null,AVAILABLE,@Spark}
spark-hudi-1         | 3811 [main] INFO  org.sparkproject.jetty.server.handler.ContextHandler  - Started o.s.j.s.ServletContextHandler@6d17914a{/sqlserver/session/json,null,AVAILABLE,@Spark}
spark-hudi-1         | 13941 [ThriftBinaryCLIService] WARN  org.apache.thrift.server.TThreadPoolServer  - Transport error occurred during acceptance of message.
spark-hudi-1         | org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Accept timed out
spark-hudi-1         | 	at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:134)
spark-hudi-1         | 	at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:35)
spark-hudi-1         | 	at org.apache.thrift.transport.TServerTransport.accept(TServerTransport.java:60)
spark-hudi-1         | 	at org.apache.thrift.server.TThreadPoolServer.execute(TThreadPoolServer.java:185)
spark-hudi-1         | 	at org.apache.thrift.server.TThreadPoolServer.serve(TThreadPoolServer.java:175)
spark-hudi-1         | 	at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:135)
spark-hudi-1         | 	at java.base/java.lang.Thread.run(Thread.java:829)
spark-hudi-1         | Caused by: java.net.SocketTimeoutException: Accept timed out
spark-hudi-1         | 	at java.base/java.net.PlainSocketImpl.socketAccept(Native Method)
spark-hudi-1         | 	at java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:474)
spark-hudi-1         | 	at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:576)
spark-hudi-1         | 	at java.base/java.net.ServerSocket.accept(ServerSocket.java:539)
spark-hudi-1         | 	at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:129)
spark-hudi-1         | 	... 6 more
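The `UnknownHostException: hadoop-master` above comes from the warehouse dir still pointing at `hdfs://hadoop-master:9000` (see the `SharedState` log line), a host that does not exist in this compose stack. A minimal workaround sketch, assuming MinIO is the intended store (bucket name, endpoint, and credentials below are assumptions, not values confirmed by this repo's config), is to override the warehouse and S3A settings when launching spark-shell:

```shell
# Sketch only — bucket/endpoint/credentials are assumptions; match them
# to the minio service in this repo's docker-compose.yaml.
spark-shell \
  --conf spark.sql.warehouse.dir=s3a://huditest/warehouse \
  --conf spark.hadoop.fs.s3a.endpoint=http://minio:9000 \
  --conf spark.hadoop.fs.s3a.path.style.access=true
```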

@alberttwong alberttwong changed the title issue with hudi/trino-hudi-minio issue with hudi/trino-hudi-minio. HMS won't startup. Missing instructions Feb 2, 2024
@alberttwong

Let's assume that we can run the Scala code.

scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._

scala> import org.apache.spark.sql.types._
import org.apache.spark.sql.types._

scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row

scala> import org.apache.spark.sql.SaveMode._
import org.apache.spark.sql.SaveMode._

scala> import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceReadOptions._

scala> import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.DataSourceWriteOptions._

scala> import org.apache.hudi.config.HoodieWriteConfig._
import org.apache.hudi.config.HoodieWriteConfig._

scala> import scala.collection.JavaConversions._
import scala.collection.JavaConversions._

scala> 

scala> val schema = StructType( Array(
     |                  StructField("language", StringType, true),
     |                  StructField("users", StringType, true),
     |                  StructField("id", StringType, true)
     |              ))
schema: org.apache.spark.sql.types.StructType = StructType(StructField(language,StringType,true), StructField(users,StringType,true), StructField(id,StringType,true))

scala> 

scala> val rowData= Seq(Row("Java", "20000", "a"), 
     |                Row("Python", "100000", "b"), 
     |                Row("Scala", "3000", "c"))
rowData: Seq[org.apache.spark.sql.Row] = List([Java,20000,a], [Python,100000,b], [Scala,3000,c])

scala> 

scala> 

scala> val df = spark.createDataFrame(rowData,schema)
warning: one deprecation (since 2.12.0); for details, enable `:setting -deprecation' or `:replay -deprecation'
302793 [main] WARN  org.apache.hadoop.fs.FileSystem  - Failed to initialize fileystem hdfs://hadoop-master:9000/user/hive/warehouse: java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop-master
302798 [main] WARN  org.apache.spark.sql.internal.SharedState  - Cannot qualify the warehouse path, leaving it unqualified.
java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop-master
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:466)
        at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:134)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:201)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:186)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
        at org.apache.spark.sql.internal.SharedState$.qualifyWarehousePath(SharedState.scala:282)
        at org.apache.spark.sql.internal.SharedState.liftedTree1$1(SharedState.scala:80)
        at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:79)
        at org.apache.spark.sql.SparkSession.$anonfun$sharedState$1(SparkSession.scala:139)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:139)
        at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:138)
        at org.apache.spark.sql.SparkSession.$anonfun$sessionState$2(SparkSession.scala:158)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:156)
        at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:153)
        at org.apache.spark.sql.SparkSession.$anonfun$new$3(SparkSession.scala:113)
        at scala.Option.map(Option.scala:230)
        at org.apache.spark.sql.SparkSession.$anonfun$new$1(SparkSession.scala:113)
        at org.apache.spark.sql.internal.SQLConf$.get(SQLConf.scala:230)
        at org.apache.spark.sql.catalyst.util.CharVarcharUtils$.failIfHasCharVarchar(CharVarcharUtils.scala:63)
        at org.apache.spark.sql.SparkSession.$anonfun$createDataFrame$4(SparkSession.scala:387)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:386)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:46)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:50)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:52)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:54)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:56)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:58)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:60)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:62)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:64)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:66)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:68)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:70)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:72)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:74)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:76)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:78)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:80)
        at $line24.$read$$iw$$iw$$iw$$iw$$iw.<init>(<console>:82)
        at $line24.$read$$iw$$iw$$iw$$iw.<init>(<console>:84)
        at $line24.$read$$iw$$iw$$iw.<init>(<console>:86)
        at $line24.$read$$iw$$iw.<init>(<console>:88)
        at $line24.$read$$iw.<init>(<console>:90)
        at $line24.$read.<init>(<console>:92)
        at $line24.$read$.<init>(<console>:96)
        at $line24.$read$.<clinit>(<console>)
        at $line24.$eval$.$print$lzycompute(<console>:7)
        at $line24.$eval$.$print(<console>:6)
        at $line24.$eval.$print(<console>)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:747)
        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1020)
        at scala.tools.nsc.interpreter.IMain.$anonfun$interpret$1(IMain.scala:568)
        at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:36)
        at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:116)
        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:41)
        at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:567)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:594)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:564)
        at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:865)
        at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:733)
        at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:435)
        at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:456)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:239)
        at org.apache.spark.repl.Main$.doMain(Main.scala:78)
        at org.apache.spark.repl.Main$.main(Main.scala:58)
        at org.apache.spark.repl.Main.main(Main.scala)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: hadoop-master
        ... 92 more
df: org.apache.spark.sql.DataFrame = [language: string, users: string ... 1 more field]

scala> 

scala> val tableName = "hudi_coders_hive"
tableName: String = hudi_coders_hive

scala> val basePath = "s3a://huditest/hudi_coders"
basePath: String = s3a://huditest/hudi_coders

scala> 

scala> df.write.format("hudi").
     |   option(TABLE_NAME, tableName).
     |   option(RECORDKEY_FIELD_OPT_KEY, "id").
     |   option(PARTITIONPATH_FIELD_OPT_KEY, "language").
     |   option(PRECOMBINE_FIELD_OPT_KEY, "users").
     |   option("hoodie.datasource.write.hive_style_partitioning", "true").
     |   option("hoodie.datasource.hive_sync.enable", "true").
     |   option("hoodie.datasource.hive_sync.mode", "hms").
     |   option("hoodie.datasource.hive_sync.database", "default").
     |   option("hoodie.datasource.hive_sync.table", tableName).
     |   option("hoodie.datasource.hive_sync.partition_fields", "language").
     |   option("hoodie.datasource.hive_sync.partition_extractor_class", "org.apache.hudi.hive.MultiPartKeysValueExtractor").
     |   option("hoodie.datasource.hive_sync.metastore.uris", "thrift://hive-metastore:9083").
     |   mode(Overwrite).
     |   save(basePath)
warning: one deprecation; for details, enable `:setting -deprecation' or `:replay -deprecation'
307181 [main] WARN  org.apache.hadoop.metrics2.impl.MetricsConfig  - Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
org.apache.hadoop.fs.s3a.UnknownStoreException: s3a://huditest/hudi_coders/.hoodie
  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:257)
  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3348)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3053)
  at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1760)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:4263)
  at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:84)
  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:171)
  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:110)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:110)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:106)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
  at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:106)
  at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:93)
  at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:91)
  at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:128)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:848)
  at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:382)
  at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:355)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
  ... 75 elided
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: NoSuchBucket; Request ID: 17B02AB6DCD308E0; S3 Extended Request ID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8; Proxy: null)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5437)
  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5384)
  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5378)
  at com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:970)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listObjects$7(S3AFileSystem.java:2116)
  at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:489)
  at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:412)
  at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:375)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.listObjects(S3AFileSystem.java:2107)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3322)
  ... 110 more

@alberttwong

After creating the missing S3 bucket, the same session succeeds:
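For reference, one way to create the bucket is with the MinIO client; the alias, endpoint, and credentials below are assumptions and should be matched to the `minio` service in this repo's docker-compose.yaml:

```shell
# Assumed endpoint/credentials — adjust to this compose stack's minio service.
mc alias set local http://localhost:9000 minio minio123
mc mb local/huditest
mc ls local
```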

scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._

scala> import org.apache.spark.sql.types._
import org.apache.spark.sql.types._

scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row

scala> import org.apache.spark.sql.SaveMode._
import org.apache.spark.sql.SaveMode._

scala> import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceReadOptions._

scala> import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.DataSourceWriteOptions._

scala> import org.apache.hudi.config.HoodieWriteConfig._
import org.apache.hudi.config.HoodieWriteConfig._

scala> import scala.collection.JavaConversions._
import scala.collection.JavaConversions._

scala> 

scala> val schema = StructType( Array(
     |                  StructField("language", StringType, true),
     |                  StructField("users", StringType, true),
     |                  StructField("id", StringType, true)
     |              ))
schema: org.apache.spark.sql.types.StructType = StructType(StructField(language,StringType,true), StructField(users,StringType,true), StructField(id,StringType,true))

scala> 

scala> val rowData= Seq(Row("Java", "20000", "a"), 
     |                Row("Python", "100000", "b"), 
     |                Row("Scala", "3000", "c"))
rowData: Seq[org.apache.spark.sql.Row] = List([Java,20000,a], [Python,100000,b], [Scala,3000,c])

scala> 

scala> 

scala> val df = spark.createDataFrame(rowData,schema)
warning: one deprecation (since 2.12.0); for details, enable `:setting -deprecation' or `:replay -deprecation'
df: org.apache.spark.sql.DataFrame = [language: string, users: string ... 1 more field]

scala> 

scala> val tableName = "hudi_coders_hive"
tableName: String = hudi_coders_hive

scala> val basePath = "s3a://huditest/hudi_coders"
basePath: String = s3a://huditest/hudi_coders

scala> 

scala> df.write.format("hudi").
     |   option(TABLE_NAME, tableName).
     |   option(RECORDKEY_FIELD_OPT_KEY, "id").
     |   option(PARTITIONPATH_FIELD_OPT_KEY, "language").
     |   option(PRECOMBINE_FIELD_OPT_KEY, "users").
     |   option("hoodie.datasource.write.hive_style_partitioning", "true").
     |   option("hoodie.datasource.hive_sync.enable", "true").
     |   option("hoodie.datasource.hive_sync.mode", "hms").
     |   option("hoodie.datasource.hive_sync.database", "default").
     |   option("hoodie.datasource.hive_sync.table", tableName).
     |   option("hoodie.datasource.hive_sync.partition_fields", "language").
     |   option("hoodie.datasource.hive_sync.partition_extractor_class", "org.apache.hudi.hive.MultiPartKeysValueExtractor").
     |   option("hoodie.datasource.hive_sync.metastore.uris", "thrift://hive-metastore:9083").
     |   mode(Overwrite).
     |   save(basePath)
warning: one deprecation; for details, enable `:setting -deprecation' or `:replay -deprecation'
856073 [main] WARN  org.apache.hudi.common.config.DFSPropertiesConfiguration  - Cannot find HUDI_CONF_DIR, please set it as the dir of hudi-defaults.conf
856107 [main] WARN  org.apache.hudi.common.config.DFSPropertiesConfiguration  - Properties file file:/etc/hudi/conf/hudi-defaults.conf not found. Ignoring to load props file
858312 [main] WARN  org.apache.hudi.metadata.HoodieBackedTableMetadata  - Metadata table was not found at path s3a://huditest/hudi_coders/.hoodie/metadata
858778 [main] WARN  org.apache.hadoop.fs.s3a.S3ABlockOutputStream  - Application invoked the Syncable API against stream writing to hudi_coders/.hoodie/metadata/files/.files-0000_00000000000000.log.1_0-0-0. This is unsupported
863754 [Executor task launch worker for task 0.0 in stage 6.0 (TID 6)] WARN  org.apache.hadoop.metrics2.impl.MetricsConfig  - Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
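If the write succeeds but nothing shows up in HMS, a quick way to separate storage problems from sync problems is to read the table back in the same spark-shell session. A minimal sketch, reusing the `basePath` and `tableName` values defined above (not part of the original transcript):

```scala
// Read the Hudi table directly from MinIO, bypassing the metastore entirely
val readDf = spark.read.format("hudi").load(basePath)
readDf.select("language", "users", "id").show(false)

// If hive_sync worked, the same table should also be visible through the metastore
spark.sql("SHOW TABLES IN default").show(false)
spark.sql(s"SELECT language, users, id FROM default.$tableName").show(false)
```

If the direct `load(basePath)` read works but the metastore queries fail, the problem is the `hoodie.datasource.hive_sync.*` configuration or metastore connectivity, not the write itself.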

@alberttwong

Author

The problem now is that I don't see any tables or data in HMS.

@alberttwong
Author

my current docker-compose.yaml

version: '3.7'
services:
  starrocks:
    image: starrocks/allin1-ubuntu
    hostname: starrocks-fe
    ports:
      - 8030:8030
      - 8040:8040
      - 9030:9030
    networks:
      - trino-network
    links:
      - "hive-metastore"

  trino-coordinator:
    image: 'trinodb/trino:latest'
    hostname: trino-coordinator
    ports:
      - '8080:8080'
    volumes:
      - ./etc:/etc/trino
    networks:
      - trino-network

  spark-hudi:
    image: 'ghcr.io/trinodb/testing/spark3-hudi:latest'
    hostname: spark-hudi
    depends_on:
      - hive-metastore
      - minio
    volumes:
      - ./conf/spark-defaults.conf:/spark-3.2.1-bin-hadoop3.2/conf/spark-defaults.conf:ro
      - ./conf/core-site.xml:/spark-3.2.1-bin-hadoop3.2/conf/core-site.xml:ro
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    networks:
      - trino-network

  metastore_db:
    image: postgres:11
    hostname: metastore_db
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: metastore_db
    networks:
      - trino-network

  hive-metastore:
    hostname: hive-metastore
    image: 'starburstdata/hive:3.1.2-e.18'
    ports:
      - '9083:9083' # Metastore Thrift
    environment:
      HIVE_METASTORE_DRIVER: org.postgresql.Driver
      HIVE_METASTORE_JDBC_URL: jdbc:postgresql://metastore_db:5432/metastore_db
      HIVE_METASTORE_USER: admin
      HIVE_METASTORE_PASSWORD: admin
      HIVE_METASTORE_WAREHOUSE_DIR: s3://datalake/
      S3_ENDPOINT: http://minio:9000
      S3_ACCESS_KEY: minio
      S3_SECRET_KEY: minio123
      S3_PATH_STYLE_ACCESS: "true"
      REGION: ""
      GOOGLE_CLOUD_KEY_FILE_PATH: ""
      AZURE_ADL_CLIENT_ID: ""
      AZURE_ADL_CREDENTIAL: ""
      AZURE_ADL_REFRESH_URL: ""
      AZURE_ABFS_STORAGE_ACCOUNT: ""
      AZURE_ABFS_ACCESS_KEY: ""
      AZURE_WASB_STORAGE_ACCOUNT: ""
      AZURE_ABFS_OAUTH: ""
      AZURE_ABFS_OAUTH_TOKEN_PROVIDER: ""
      AZURE_ABFS_OAUTH_CLIENT_ID: ""
      AZURE_ABFS_OAUTH_SECRET: ""
      AZURE_ABFS_OAUTH_ENDPOINT: ""
      AZURE_WASB_ACCESS_KEY: ""
      HIVE_METASTORE_USERS_IN_ADMIN_ROLE: "admin"
    depends_on:
      - metastore_db
    networks:
      - trino-network

  minio:
    image: 'minio/minio:latest'
    hostname: minio
    ports:
      - '9000:9000'
      - '9001:9001'
    volumes:
      - minio-data:/data
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server --console-address ":9001" /data
    networks:
      - trino-network

volumes:
  minio-data:
    driver: local

networks:
  trino-network:
    driver: bridge
    name: trino
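
For context, the two files mounted into the spark-hudi service (spark-defaults.conf and core-site.xml) do the wiring between Spark, MinIO, and the metastore. A minimal sketch of what spark-defaults.conf needs to contain for this compose file, assuming the S3A and Hudi property names from their respective docs, with endpoint and credentials matching the minio and hive-metastore services above:

```
spark.serializer                        org.apache.spark.serializer.KryoSerializer
spark.sql.extensions                    org.apache.spark.sql.hudi.HoodieSparkSessionExtension
spark.hadoop.hive.metastore.uris        thrift://hive-metastore:9083
spark.hadoop.fs.s3a.endpoint            http://minio:9000
spark.hadoop.fs.s3a.access.key          minio
spark.hadoop.fs.s3a.secret.key          minio123
spark.hadoop.fs.s3a.path.style.access   true
spark.hadoop.fs.s3a.impl                org.apache.hadoop.fs.s3a.S3AFileSystem
```

If any of these are missing, spark-shell will typically fail at a different stage: missing S3A settings break the `save(basePath)` write, while a missing metastore URI makes hive_sync silently register nothing.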

@bitsondatadev
Owner

@mosabua it may make sense to get this back to stable and do a follow-up episode on TCB with an updated demo and the current status of the connector.

@bitsondatadev
Owner

Thanks for the update @alberttwong. I'm happy to move to PostgreSQL in the backend; I think someone already added it. For now, though, I want to use my own image, since I'd like to have standard settings set up for instructional purposes. Plus, this HMS image already has some small community updates, so I'd like to fix it.

@alberttwong
Author

So I got it working by swapping out the HMS: https://github.com/alberttwong/demo/tree/master/documentation-samples/hudi. I'll be moving this to the official StarRocks demo repo.

@alberttwong
Author

There is an official Apache Hive image; I'll try switching to it later. Any reason why this wasn't used instead of the custom HMS + PostgreSQL setup?

  hive-metastore:
    container_name: hive-metastore
    hostname: hive-metastore
    image: 'apache/hive:4.0.0-alpha-2'
    ports:
      - '9083:9083' # Metastore Thrift
    environment:
      SERVICE_NAME: metastore
      HIVE_METASTORE_WAREHOUSE_DIR: /home/data
    volumes:
      - ./data:/home/data

@bitsondatadev
Owner

Looks like this fixed the issue.

#38 (comment)

Any reason why this wasn't used instead of custom HMS + psql?

There were some oddities with the earlier 3.x HMS images. If the latest 4.x metastore has those worked out, then I would love to get rid of my HMS Docker image.

I'll open a new Issue to remind myself to look at that later.
