From 3897d9b2058324207b47e98ffbdd4a7b3d08f68a Mon Sep 17 00:00:00 2001
From: yuqi
Date: Wed, 15 Jan 2025 16:01:33 +0800
Subject: [PATCH 1/4] Fix several document errors

---
 docs/hadoop-catalog-with-gcs.md         | 2 +-
 docs/hadoop-catalog-with-oss.md         | 5 ++---
 docs/hive-catalog-with-cloud-storage.md | 6 ++++++
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/docs/hadoop-catalog-with-gcs.md b/docs/hadoop-catalog-with-gcs.md
index 5422047efd8..29465c25493 100644
--- a/docs/hadoop-catalog-with-gcs.md
+++ b/docs/hadoop-catalog-with-gcs.md
@@ -47,7 +47,7 @@ Refer to [Fileset configurations](./hadoop-catalog.md#fileset-properties) for mo
 
 This section will show you how to use the Hadoop catalog with GCS in Gravitino, including detailed examples.
 
-### Create a Hadoop catalog with GCS
+### Step 1: Create a Hadoop catalog with GCS
 
 First, you need to create a Hadoop catalog with GCS. The following example shows how to create a Hadoop catalog with GCS:
 
diff --git a/docs/hadoop-catalog-with-oss.md b/docs/hadoop-catalog-with-oss.md
index b9ef5f44e27..f330f7ede9b 100644
--- a/docs/hadoop-catalog-with-oss.md
+++ b/docs/hadoop-catalog-with-oss.md
@@ -123,7 +123,7 @@ oss_catalog = gravitino_client.create_catalog(name="test_catalog",
 </TabItem>
 </Tabs>
 
-Step 2: Create a Schema
+### Step 2: Create a Schema
 
 Once the Hadoop catalog with OSS is created, you can create a schema inside that catalog. Below are examples of how to do this:
 
@@ -174,11 +174,10 @@ catalog.as_schemas().create_schema(name="test_schema",
 </TabItem>
 </Tabs>
 
-### Create a fileset
+### Step 3: Create a fileset
 
 Now that the schema is created, you can create a fileset inside it. Here’s how:
 
-
diff --git a/docs/hive-catalog-with-cloud-storage.md b/docs/hive-catalog-with-cloud-storage.md
index 49a018907b4..e8756c5113b 100644
--- a/docs/hive-catalog-with-cloud-storage.md
+++ b/docs/hive-catalog-with-cloud-storage.md
@@ -84,8 +84,14 @@ cp ${HADOOP_HOME}/share/hadoop/tools/lib/*aws* ${HIVE_HOME}/lib
 
 # For Azure Blob Storage(ADLS)
 cp ${HADOOP_HOME}/share/hadoop/tools/lib/*azure* ${HIVE_HOME}/lib
+
+# For Google Cloud Storage(GCS)
+cp gcs-connector-hadoop3-2.2.22-shaded.jar ${HIVE_HOME}/lib
 ```
 
+[`gcs-connector-hadoop3-2.2.22-shaded.jar`](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases/download/v2.2.22/gcs-connector-hadoop3-2.2.22-shaded.jar) is the bundled jar containing the Hadoop GCS connector; choose the GCS connector jar that matches the version of Hadoop you are using.
+
+
 Alternatively, you can download the required JARs from the Maven repository and place them in the Hive classpath. It is crucial to verify that the JARs are compatible with the version of Hadoop you are using to avoid any compatibility issue.
 
 ### Restart Hive metastore

From 63768098fec4f7fdc5ae2d934901a32fbfc0d2cd Mon Sep 17 00:00:00 2001
From: yuqi
Date: Wed, 15 Jan 2025 16:26:52 +0800
Subject: [PATCH 2/4] fix

---
 docs/hive-catalog-with-cloud-storage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/hive-catalog-with-cloud-storage.md b/docs/hive-catalog-with-cloud-storage.md
index e8756c5113b..3a16f831e6c 100644
--- a/docs/hive-catalog-with-cloud-storage.md
+++ b/docs/hive-catalog-with-cloud-storage.md
@@ -271,7 +271,7 @@ To access S3-stored tables using Spark, you need to configure the SparkSession a
 sparkSession.sql("...");
 ```
 
-:::Note
+:::note
 Please download [Hadoop AWS jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws), [aws java sdk jar](https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle) and place them in the classpath of the Spark. If the JARs are missing, Spark will not be able to access the S3 storage.
 Azure Blob Storage(ADLS) requires the [Hadoop Azure jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure), [Azure cloud sdk jar](https://mvnrepository.com/artifact/com.azure/azure-storage-blob) to be placed in the classpath of the Spark.
 for Google Cloud Storage(GCS), you need to download the [Hadoop GCS jar](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases) and place it in the classpath of the Spark.

From f14c297eda9abb73eb4634460ad7d8734f9adfa5 Mon Sep 17 00:00:00 2001
From: yuqi
Date: Wed, 15 Jan 2025 18:04:35 +0800
Subject: [PATCH 3/4] fix

---
 docs/hive-catalog-with-cloud-storage.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/hive-catalog-with-cloud-storage.md b/docs/hive-catalog-with-cloud-storage.md
index 3a16f831e6c..9336175ff5d 100644
--- a/docs/hive-catalog-with-cloud-storage.md
+++ b/docs/hive-catalog-with-cloud-storage.md
@@ -1,8 +1,8 @@
 ---
-title: "Hive catalog with s3 and adls"
+title: "Hive catalog with S3, ADLS and GCS"
 slug: /hive-catalog
 date: 2024-9-24
-keyword: Hive catalog cloud storage S3 ADLS
+keyword: Hive catalog cloud storage S3 ADLS GCS
 license: "This software is licensed under the Apache License version 2."
 ---
 

From cb6f382106ff41313075410da8467a577891029d Mon Sep 17 00:00:00 2001
From: yuqi
Date: Wed, 15 Jan 2025 20:51:53 +0800
Subject: [PATCH 4/4] Remove blank line.

---
 docs/hive-catalog-with-cloud-storage.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/docs/hive-catalog-with-cloud-storage.md b/docs/hive-catalog-with-cloud-storage.md
index 9336175ff5d..b1403ba5e16 100644
--- a/docs/hive-catalog-with-cloud-storage.md
+++ b/docs/hive-catalog-with-cloud-storage.md
@@ -91,7 +91,6 @@ cp gcs-connector-hadoop3-2.2.22-shaded.jar ${HIVE_HOME}/lib
 
 [`gcs-connector-hadoop3-2.2.22-shaded.jar`](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases/download/v2.2.22/gcs-connector-hadoop3-2.2.22-shaded.jar) is the bundled jar containing the Hadoop GCS connector; choose the GCS connector jar that matches the version of Hadoop you are using.
 
-
 Alternatively, you can download the required JARs from the Maven repository and place them in the Hive classpath. It is crucial to verify that the JARs are compatible with the version of Hadoop you are using to avoid any compatibility issue.
 
 ### Restart Hive metastore
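For the Spark-side GCS note these patches touch, a minimal sketch of the implied SparkSession setup follows. It assumes the GCS connector jar from the note is already on the Spark classpath; the `GcsSparkCheck` class name, the service-account key path, and `example_db.example_table` are placeholders for illustration, not values from the docs:

```java
import org.apache.spark.sql.SparkSession;

public class GcsSparkCheck {
    public static void main(String[] args) {
        // Sketch only: assumes gcs-connector-hadoop3-*-shaded.jar is on the classpath.
        SparkSession sparkSession = SparkSession.builder()
                .appName("hive-gcs-check")
                .enableHiveSupport()
                // Register the GCS connector as the FileSystem for gs:// paths.
                .config("spark.hadoop.fs.gs.impl",
                        "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
                // Authenticate with a service-account key file (placeholder path).
                .config("spark.hadoop.google.cloud.auth.service.account.enable", "true")
                .config("spark.hadoop.google.cloud.auth.service.account.json.keyfile",
                        "/path/to/service-account.json")
                .getOrCreate();

        // Placeholder table: any Hive table whose LOCATION is a gs:// URI reads the same way.
        sparkSession.sql("SELECT * FROM example_db.example_table").show();
    }
}
```

The `spark.hadoop.` prefix forwards each property to the underlying Hadoop configuration, so the same keys could equally live in `core-site.xml` on the Hive and Spark hosts.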