[doc] Update doc style to fix minor typos (#4465)
Edward-Gavin authored Nov 7, 2024
1 parent c4f1273 commit da3e795
Showing 6 changed files with 18 additions and 18 deletions.
4 changes: 2 additions & 2 deletions docs/content/engines/doris.md
@@ -73,13 +73,13 @@ See [Apache Doris Website](https://doris.apache.org/docs/lakehouse/datalake-anal

1. Query Paimon table with full qualified name

-```
+```sql
SELECT * FROM paimon_hdfs.paimon_db.paimon_table;
```

2. Switch to Paimon Catalog and query

-```
+```sql
SWITCH paimon_hdfs;
USE paimon_db;
SELECT * FROM paimon_table;
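For context, the queries above assume a Paimon catalog named `paimon_hdfs` has already been created in Doris. A minimal sketch, with the property names taken from Doris's Paimon catalog documentation and an illustrative warehouse path:

```sql
CREATE CATALOG paimon_hdfs PROPERTIES (
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "warehouse" = "hdfs://nameservice1/user/paimon/warehouse"
);
```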
8 changes: 4 additions & 4 deletions docs/content/engines/trino.md
@@ -34,9 +34,9 @@ Paimon currently supports Trino 420 and above.

## Filesystem

-From version 0.8, paimon share trino filesystem for all actions, which means, you should
-config trino filesystem before using trino-paimon. You can find information about how to config
-filesystems for trino on trino official website.
+From version 0.8, Paimon share Trino filesystem for all actions, which means, you should
+config Trino filesystem before using trino-paimon. You can find information about how to config
+filesystems for Trino on Trino official website.

## Preparing Paimon Jar File

@@ -113,7 +113,7 @@ If you are using HDFS, choose one of the following ways to configure your HDFS:
- set environment variable HADOOP_CONF_DIR.
- configure `hadoop-conf-dir` in the properties.

-If you are using a hadoop filesystem, you can still use trino-hdfs and trino-hive to config it.
+If you are using a Hadoop filesystem, you can still use trino-hdfs and trino-hive to config it.
For example, if you use oss as a storage, you can write in `paimon.properties` according to [Trino Reference](https://trino.io/docs/current/connector/hive.html#hdfs-configuration):

```
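As an illustration of the paragraph above, a catalog file for the connector might look like the following sketch. The `connector.name` and `warehouse` keys are assumed from the paimon-trino docs, and `hive.config.resources` is the trino-hive Hadoop-config mechanism from the linked Trino reference, assumed here to apply to `paimon.properties` as well; the OSS endpoint and credentials would then live in that core-site.xml:

```
# assumed catalog file for the Paimon connector (keys per paimon-trino docs)
connector.name=paimon
warehouse=oss://my-bucket/paimon-warehouse
# reuse trino-hive's Hadoop config mechanism from the linked reference
hive.config.resources=/etc/trino/conf/core-site.xml
```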
8 changes: 4 additions & 4 deletions docs/content/flink/action-jars.md
@@ -260,7 +260,7 @@ For more information of 'delete', see
## Drop Partition
-Run the following command to submit a drop_partition job for the table.
+Run the following command to submit a 'drop_partition' job for the table.
```bash
<FLINK_HOME>/bin/flink run \
@@ -276,7 +276,7 @@ partition_spec:
key1=value1,key2=value2...
```
-For more information of drop_partition, see
+For more information of 'drop_partition', see
```bash
<FLINK_HOME>/bin/flink run \
@@ -286,7 +286,7 @@ For more information of drop_partition, see
## Rewrite File Index
-Run the following command to submit a rewrite_file_index job for the table.
+Run the following command to submit a 'rewrite_file_index' job for the table.
```bash
<FLINK_HOME>/bin/flink run \
@@ -297,7 +297,7 @@ Run the following command to submit a rewrite_file_index job for the table.
[--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]]
```
-For more information of rewrite_file_index, see
+For more information of 'rewrite_file_index', see
```bash
<FLINK_HOME>/bin/flink run \
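Pieced together from the truncated snippets above, full submissions might look like the following sketch. The action jar path and the flag names (`--warehouse`, `--database`, `--table`, `--partition`, `--identifier`) are assumed from the visible context and the linked pages; all values are illustrative:

```bash
# drop a partition (partition_spec syntax shown above)
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-{{< version >}}.jar \
    drop_partition \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --table test_table \
    --partition dt=20240101,hh=09

# rebuild the file index for a table
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-{{< version >}}.jar \
    rewrite_file_index \
    --warehouse hdfs:///path/to/warehouse \
    --identifier test_db.test_table
```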
4 changes: 2 additions & 2 deletions docs/content/flink/clone-tables.md
@@ -39,10 +39,10 @@ However, if you want to clone the table while writing it at the same time, submi

```sql
CALL sys.clone(
-warehouse => 'source_warehouse_path`,
+warehouse => 'source_warehouse_path',
[`database` => 'source_database_name',]
[`table` => 'source_table_name',]
-target_warehouse => 'target_warehouse_path`,
+target_warehouse => 'target_warehouse_path',
[target_database => 'target_database_name',]
[target_table => 'target_table_name',]
[parallelism => <parallelism>]
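Substituting concrete values into the corrected signature, a call might look like this sketch (paths and names are illustrative):

```sql
CALL sys.clone(
    warehouse => 'hdfs:///path/to/source_warehouse',
    `database` => 'order_db',
    `table` => 'orders',
    target_warehouse => 'hdfs:///path/to/target_warehouse',
    target_database => 'order_db',
    target_table => 'orders_backup',
    parallelism => 4
);
```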
2 changes: 1 addition & 1 deletion docs/content/flink/expire-partition.md
@@ -134,7 +134,7 @@ More options:
<td><h5>end-input.check-partition-expire</h5></td>
<td style="word-wrap: break-word;">false</td>
<td>Boolean</td>
-<td>Whether check partition expire after batch mode or bounded stream job finish.</li></ul></td>
+<td>Whether check partition expire after batch mode or bounded stream job finish.</td>
</tr>
</tbody>
</table>
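For illustration, this option can be set like any other table option through Flink SQL; the table name here is hypothetical:

```sql
ALTER TABLE my_partitioned_table SET (
    'end-input.check-partition-expire' = 'true'
);
```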
10 changes: 5 additions & 5 deletions docs/content/flink/savepoint.md
@@ -41,12 +41,12 @@ metadata left. This is very safe, so we recommend using this feature to stop and

## Tag with Savepoint

-In Flink, we may consume from kafka and then write to paimon. Since flink's checkpoint only retains a limited number,
+In Flink, we may consume from Kafka and then write to Paimon. Since Flink's checkpoint only retains a limited number,
we will trigger a savepoint at certain time (such as code upgrades, data updates, etc.) to ensure that the state can
be retained for a longer time, so that the job can be restored incrementally.

-Paimon's snapshot is similar to flink's checkpoint, and both will automatically expire, but the tag feature of paimon
-allows snapshots to be retained for a long time. Therefore, we can combine the two features of paimon's tag and flink's
+Paimon's snapshot is similar to Flink's checkpoint, and both will automatically expire, but the tag feature of Paimon
+allows snapshots to be retained for a long time. Therefore, we can combine the two features of Paimon's tag and Flink's
savepoint to achieve incremental recovery of job from the specified savepoint.

{{< hint warning >}}
@@ -64,7 +64,7 @@ You can set `sink.savepoint.auto-tag` to `true` to enable the feature of automat

**Step 2: Trigger savepoint.**

-You can refer to [flink savepoint](https://nightlies.apache.org/flink/flink-docs-stable/docs/ops/state/savepoints/#operations)
+You can refer to [Flink savepoint](https://nightlies.apache.org/flink/flink-docs-stable/docs/ops/state/savepoints/#operations)
to learn how to configure and trigger savepoint.

**Step 3: Choose the tag corresponding to the savepoint.**
@@ -74,7 +74,7 @@ The tag corresponding to the savepoint will be named in the form of `savepoint-$

**Step 4: Rollback the paimon table.**

-[Rollback]({{< ref "maintenance/manage-tags#rollback-to-tag" >}}) the paimon table to the specified tag.
+[Rollback]({{< ref "maintenance/manage-tags#rollback-to-tag" >}}) the Paimon table to the specified tag.

**Step 5: Restart from the savepoint.**

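A sketch of Steps 1 and 4 in Flink SQL, with a hypothetical table name and the rollback procedure signature assumed from the manage-tags page referenced above:

```sql
-- Step 1: tag a snapshot automatically whenever a savepoint is triggered
ALTER TABLE my_db.my_table SET ('sink.savepoint.auto-tag' = 'true');

-- Step 4: roll the table back to the tag created for the chosen savepoint
CALL sys.rollback_to(`table` => 'my_db.my_table', tag => 'savepoint-1');
```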
