[Doc] Autogen nav (backport #51073) (#51080)
Signed-off-by: DanRoscigno <[email protected]>
Co-authored-by: Dan Roscigno <[email protected]>
mergify[bot] and DanRoscigno authored Sep 18, 2024
1 parent e97b47c commit 498e1cd
Showing 110 changed files with 1,585 additions and 2,638 deletions.
11 changes: 6 additions & 5 deletions docs/docusaurus/package.json
@@ -18,9 +18,10 @@
   "dependencies": {
     "@algolia/client-search": "^4.20.0",
     "@docsearch/react": "3",
-    "@docusaurus/core": "^3.5.2",
-    "@docusaurus/preset-classic": "^3.5.2",
-    "@docusaurus/theme-search-algolia": "^3.5.2",
+    "@docusaurus/core": "^3.1.1",
+    "@docusaurus/plugin-client-redirects": "^3.1.1",
+    "@docusaurus/preset-classic": "^3.1.1",
+    "@docusaurus/theme-search-algolia": "^3.1.1",
     "@mdx-js/react": "^3.0.0",
     "clsx": "^2.0.0",
     "fs-extra": "^11.1.1",
@@ -29,8 +30,8 @@
     "react-dom": "^18.2.0"
   },
   "devDependencies": {
-    "@docusaurus/module-type-aliases": "^3.5.2",
-    "@docusaurus/types": "^3.5.2"
+    "@docusaurus/module-type-aliases": "^3.1.1",
+    "@docusaurus/types": "^3.1.1"
   },
   "browserslist": {
     "production": [
53 changes: 9 additions & 44 deletions docs/docusaurus/sidebars.json
@@ -102,42 +102,13 @@
       "type": "category",
       "label": "Table Design",
       "link": {
-        "type": "doc",
-        "id": "table_design/StarRocks_table_design"
+        "type": "generated-index"
       },
       "items": [
         {
-          "type": "category",
-          "label": "Table types",
-          "link": {
-            "type": "doc",
-            "id": "table_design/table_types/table_types"
-          },
-          "items": [
-            "table_design/table_types/table_capabilities",
-            "table_design/table_types/duplicate_key_table",
-            "table_design/table_types/aggregate_table",
-            "table_design/table_types/unique_key_table",
-            "table_design/table_types/primary_key_table"
-          ]
-        },
-        {
-          "type": "category",
-          "label": "Data distribution",
-          "link": {
-            "type": "doc",
-            "id": "table_design/Data_distribution"
-          },
-          "items": [
-            "table_design/expression_partitioning",
-            "table_design/list_partitioning",
-            "table_design/dynamic_partitioning",
-            "table_design/Temporary_partition",
-            "table_design/feature-support-data-distribution"
-          ]
-        },
-        "table_design/data_compression",
-        "table_design/Sort_key"
+          "type": "autogenerated",
+          "dirName": "table_design"
+        }
       ]
     },
     {
@@ -407,19 +378,13 @@
       "type": "category",
       "label": "Resource management",
       "link": {
-        "type": "doc",
-        "id": "administration/management/resource_management/resource_management"
+        "type": "generated-index"
       },
       "items": [
-        "administration/management/resource_management/resource_group",
-        "administration/management/resource_management/query_queues",
-        "administration/management/resource_management/Query_management",
-        "administration/management/resource_management/Memory_management",
-        "administration/management/resource_management/spill_to_disk",
-        "administration/management/resource_management/Load_balance",
-        "administration/management/resource_management/Replica",
-        "administration/management/resource_management/Blacklist",
-        "administration/management/resource_management/filemanager"
+        {
+          "type": "autogenerated",
+          "dirName": "administration/management/resource_management"
+        }
       ]
     }
   ]
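Both categories now rely on the same two Docusaurus building blocks: the category link becomes a `generated-index` page built from the category's contents, and the hand-maintained item list is replaced by a single `autogenerated` entry that pulls in every doc under the named directory. A minimal sketch of the pattern follows; the label, directory name, and the optional `title`, `description`, and `slug` fields are illustrative standard Docusaurus options, not values taken from this diff:

```json
{
  "type": "category",
  "label": "Example Category",
  "link": {
    "type": "generated-index",
    "title": "Example Category",
    "description": "Illustrative only: Docusaurus renders an index page listing every doc in this category.",
    "slug": "/category/example-category"
  },
  "items": [
    {
      "type": "autogenerated",
      "dirName": "example_category"
    }
  ]
}
```

With this shape, adding or removing a Markdown file under `example_category/` updates both the sidebar and the generated index page without touching `sidebars.json`; ordering inside the category is controlled from each doc's front matter, as the `sidebar_position` additions further down show.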
3,608 changes: 1,275 additions & 2,333 deletions docs/docusaurus/yarn.lock

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/en/_assets/commonMarkdown/loadMethodIntro.md
@@ -6,4 +6,4 @@ Each of these options has its own advantages, which are detailed in the followin

In most cases, we recommend that you use the INSERT+`FILES()` method, which is much easier to use.

-However, the INSERT+`FILES()` method currently supports only the Parquet and ORC file formats. Therefore, if you need to load data of other file formats such as CSV, or [perform data changes such as DELETE during data loading](../../loading/Load_to_Primary_Key_tables.md), you can resort to Broker Load.
+However, the INSERT+`FILES()` method currently supports only the Parquet and ORC file formats. Therefore, if you need to load data of other file formats such as CSV, or perform data changes such as DELETE during data loading, you can resort to Broker Load.
1 change: 1 addition & 0 deletions docs/en/_assets/commonMarkdown/multi-service-access.mdx
@@ -0,0 +1 @@
For the best practices of multi-service access control, see [Multi-service access control](../../administration/user_privs/User_privilege.md#multi-service-access-control).
5 changes: 5 additions & 0 deletions docs/en/_assets/commonMarkdown/quickstart-iceberg-tip.mdx
@@ -0,0 +1,5 @@

:::tip
This example uses the Local Climatological Data(LCD) dataset featured in the [StarRocks Basics](../../quick_start/shared-nothing.md) Quick Start. You can load the data and try the example yourself.
:::

3 changes: 3 additions & 0 deletions docs/en/_assets/commonMarkdown/quickstart-overview-tip.mdx
@@ -0,0 +1,3 @@
## Learn by doing

Try the [Quick Starts](../../quick_start/quick_start.mdx) to get an overview of using StarRocks with realistic scenarios.
@@ -0,0 +1,5 @@

:::tip
Try Routine Load out in this [Quick Start](../../quick_start/routine-load.md)
:::

5 changes: 5 additions & 0 deletions docs/en/_assets/commonMarkdown/quickstart-shared-data.mdx
@@ -0,0 +1,5 @@

:::tip
Give [shared-data](../../quick_start/shared-data.md) a try using MinIO for object storage.
:::

@@ -0,0 +1,5 @@

:::tip
This example uses the Local Climatological Data(LCD) dataset featured in the [StarRocks Basics](../../quick_start/shared-nothing.md) Quick Start. You can load the data and try the example yourself.
:::

2 changes: 1 addition & 1 deletion docs/en/administration/management/Backup_and_restore.md
@@ -22,7 +22,7 @@ StarRocks supports the following remote storage systems:

StarRocks supports FULL backup on the granularity level of database, table, or partition.

-If you have stored a large amount of data in a table, we recommend that you back up and restore data by partition. This way, you can reduce the cost of retries in case of job failures. If you need to back up incremental data on a regular basis, you can strategize a [dynamic partitioning](../../table_design/dynamic_partitioning.md) plan (by a certain time interval, for example) for your table, and back up only new partitions each time.
+If you have stored a large amount of data in a table, we recommend that you back up and restore data by partition. This way, you can reduce the cost of retries in case of job failures. If you need to back up incremental data on a regular basis, you can strategize a [dynamic partitioning](../../table_design/data_distribution/dynamic_partitioning.md) plan (by a certain time interval, for example) for your table, and back up only new partitions each time.

### Create a repository

@@ -1,5 +1,6 @@
---
displayed_sidebar: docs
+sidebar_position: 80
---

# Blacklist Management
@@ -1,5 +1,6 @@
---
displayed_sidebar: docs
+sidebar_position: 60
---

# Load Balancing
@@ -1,5 +1,6 @@
---
displayed_sidebar: docs
+sidebar_position: 40
---

# Memory Management
@@ -1,5 +1,6 @@
---
displayed_sidebar: docs
+sidebar_position: 30
---

# Query Management
@@ -1,5 +1,6 @@
---
displayed_sidebar: docs
+sidebar_position: 70
---

# Replica management
@@ -1,5 +1,6 @@
---
displayed_sidebar: docs
+sidebar_position: 90
---

# File manager
@@ -1,5 +1,6 @@
---
displayed_sidebar: docs
+sidebar_position: 20
---

# Query queues
@@ -1,5 +1,6 @@
---
displayed_sidebar: docs
+sidebar_position: 10
---

# Resource group
@@ -60,15 +61,15 @@ You can specify CPU and memory resource quotas for a resource group on a BE by u

> **NOTE**
>
-> The amount of memory that can be used for queries is indicated by the `query_pool` parameter. For more information about the parameter, see [Memory management](Memory_management.md).
+> The amount of memory that can be used for queries is indicated by the `query_pool` parameter.
- `concurrency_limit`

This parameter specifies the upper limit of concurrent queries in a resource group. It is used to avoid system overload caused by too many concurrent queries. This parameter takes effect only when it is set greater than 0. Default: 0.

- `max_cpu_cores`

-The CPU core threshold for triggering query queue in FE. For more details, refer to [Query queues - Specify resource thresholds for resource group-level query queues](./query_queues.md#specify-resource-thresholds-for-resource-group-level-query-queues). It takes effect only when it is set to greater than `0`. Range: [0, `avg_be_cpu_cores`], where `avg_be_cpu_cores` represents the average number of CPU cores across all BE nodes. Default: 0.
+The CPU core threshold for triggering query queue in FE. This only takes effect when it is set to greater than `0`. Range: [0, `avg_be_cpu_cores`], where `avg_be_cpu_cores` represents the average number of CPU cores across all BE nodes. Default: 0.

- `spill_mem_limit_threshold`

@@ -360,9 +361,9 @@ The following FE metrics only provide statistics within the current FE node:
| starrocks_fe_query_resource_group | Count | Instantaneous | The number of queries historically run in this resource group (including those currently running). |
| starrocks_fe_query_resource_group_latency | ms | Instantaneous | The query latency percentile for this resource group. The label `type` indicates specific percentiles, including `mean`, `75_quantile`, `95_quantile`, `98_quantile`, `99_quantile`, `999_quantile`. |
| starrocks_fe_query_resource_group_err | Count | Instantaneous | The number of queries in this resource group that encountered an error. |
-| starrocks_fe_resource_group_query_queue_total | Count | Instantaneous | The total number of queries historically queued in this resource group (including those currently running). This metric is supported from v3.1.4 onwards. It is valid only when query queues are enabled, see [Query Queues](query_queues.md) for details. |
-| starrocks_fe_resource_group_query_queue_pending | Count | Instantaneous | The number of queries currently in the queue of this resource group. This metric is supported from v3.1.4 onwards. It is valid only when query queues are enabled, see [Query Queues](query_queues.md) for details. |
-| starrocks_fe_resource_group_query_queue_timeout | Count | Instantaneous | The number of queries in this resource group that have timed out while in the queue. This metric is supported from v3.1.4 onwards. It is valid only when query queues are enabled, see [Query Queues](query_queues.md) for details. |
+| starrocks_fe_resource_group_query_queue_total | Count | Instantaneous | The total number of queries historically queued in this resource group (including those currently running). This metric is supported from v3.1.4 onwards. It is valid only when query queues are enabled. |
+| starrocks_fe_resource_group_query_queue_pending | Count | Instantaneous | The number of queries currently in the queue of this resource group. This metric is supported from v3.1.4 onwards. It is valid only when query queues are enabled. |
+| starrocks_fe_resource_group_query_queue_timeout | Count | Instantaneous | The number of queries in this resource group that have timed out while in the queue. This metric is supported from v3.1.4 onwards. It is valid only when query queues are enabled. |

### BE metrics

@@ -412,11 +413,3 @@ MySQL [(none)]> SHOW USAGE RESOURCE GROUPS;
| wg2 | 0 | 127.0.0.1 | 0.400 | 4 | 8 |
+------------+----+-----------+-----------------+-----------------+------------------+
```

-## What to do next
-
-After you configure resource groups, you can manage memory resources and queries. For more information, see the following topics:
-
-- [Memory management](./Memory_management.md)
-
-- [Query management](./Query_management.md)

This file was deleted.

@@ -1,5 +1,6 @@
---
displayed_sidebar: docs
+sidebar_position: 50
---

# Spill to disk
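The `sidebar_position` values added in the front-matter hunks above are what keep the autogenerated categories in a deliberate order: Docusaurus sorts autogenerated items by `sidebar_position`, lowest first, and docs without one fall back to the default ordering. The same kind of metadata can also be attached to a whole directory through a `_category_.json` file. No such file appears in this commit; a hypothetical one for an autogenerated category would look roughly like this, with all values illustrative:

```json
{
  "label": "Resource management",
  "position": 3,
  "collapsed": true,
  "link": {
    "type": "generated-index",
    "description": "Illustrative only; not part of this commit."
  }
}
```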
2 changes: 1 addition & 1 deletion docs/en/deployment/post_deployment_setup.md
@@ -90,4 +90,4 @@ SET PROPERTY FOR '<username>' 'max_user_connections' = '1000';

## What to do next

-After deploying and setting up your StarRocks cluster, you can then proceed to design tables that best work for your scenarios. See [Understand StarRocks table design](../table_design/Table_design.md) for detailed instructions on designing a table.
+After deploying and setting up your StarRocks cluster, you can then proceed to design tables that best work for your scenarios. See [Understand StarRocks table design](../table_design/StarRocks_table_design.md) for detailed instructions on designing a table.
5 changes: 2 additions & 3 deletions docs/en/introduction/Architecture.md
@@ -1,6 +1,7 @@
---
displayed_sidebar: docs
---
+import QSOverview from '../_assets/commonMarkdown/quickstart-overview-tip.mdx'

# Architecture

@@ -79,6 +80,4 @@ Queries against hot data scan the cache directly and then the local disk, while

Caching can be enabled when creating tables. If caching is enabled, data will be written to both the local disk and backend object storage. During queries, the CN nodes first read data from the local disk. If the data is not found, it will be retrieved from the backend object storage and simultaneously cached on the local disk.

-## Learn by doing
-
-- Give [shared-data](../quick_start/shared-data.md) a try using MinIO for object storage.
+<QSOverview />
2 changes: 1 addition & 1 deletion docs/en/loading/BrokerLoad.md
@@ -98,7 +98,7 @@ Note that in StarRocks some literals are used as reserved keywords by the SQL la

> **NOTE**
>
-> Since v2.5.7, StarRocks can automatically set the number of buckets (BUCKETS) when you create a table or add a partition. You no longer need to manually set the number of buckets. For detailed information, see [determine the number of buckets](../table_design/Data_distribution.md#determine-the-number-of-buckets).
+> Since v2.5.7, StarRocks can automatically set the number of buckets (BUCKETS) when you create a table or add a partition. You no longer need to manually set the number of buckets. For detailed information, see [determine the number of buckets](../table_design/data_distribution/Data_distribution.md#determine-the-number-of-buckets).
a. Create a Primary Key table named `table1`. The table consists of three columns: `id`, `name`, and `score`, of which `id` is the primary key.

2 changes: 1 addition & 1 deletion docs/en/loading/Etl_in_loading.md
@@ -74,7 +74,7 @@ If you choose [Routine Load](./RoutineLoad.md), make sure that topics are created

> **NOTE**
>
-> Since v2.5.7, StarRocks can automatically set the number of buckets (BUCKETS) when you create a table or add a partition. You no longer need to manually set the number of buckets. For detailed information, see [determine the number of buckets](../table_design/Data_distribution.md#determine-the-number-of-buckets).
+> Since v2.5.7, StarRocks can automatically set the number of buckets (BUCKETS) when you create a table or add a partition. You no longer need to manually set the number of buckets. For detailed information, see [determine the number of buckets](../table_design/data_distribution/Data_distribution.md#determine-the-number-of-buckets).
a. Create a table named `table1`, which consists of three columns: `event_date`, `event_type`, and `user_id`.

2 changes: 1 addition & 1 deletion docs/en/loading/Flink_cdc_load.md
@@ -286,7 +286,7 @@ To synchronize data from MySQL in real time, the system needs to read data from

> **NOTICE**
>
-> Since v2.5.7, StarRocks can automatically set the number of buckets (BUCKETS) when you create a table or add a partition. You no longer need to manually set the number of buckets. For detailed information, see [determine the number of buckets](../table_design/Data_distribution.md#determine-the-number-of-buckets).
+> Since v2.5.7, StarRocks can automatically set the number of buckets (BUCKETS) when you create a table or add a partition. You no longer need to manually set the number of buckets. For detailed information, see [determine the number of buckets](../table_design/data_distribution/Data_distribution.md#determine-the-number-of-buckets).

## Synchronize data

4 changes: 2 additions & 2 deletions docs/en/loading/InsertInto.md
@@ -25,7 +25,7 @@ StarRocks v2.4 further supports overwriting data into a table by using INSERT OV
- You can cancel a synchronous INSERT transaction only by pressing the **Ctrl** and **C** keys from your MySQL client.
- You can submit an asynchronous INSERT task using [SUBMIT TASK](../sql-reference/sql-statements/loading_unloading/ETL/SUBMIT_TASK.md).
- As for the current version of StarRocks, the INSERT transaction fails by default if the data of any rows does not comply with the schema of the table. For example, the INSERT transaction fails if the length of a field in any row exceeds the length limit for the mapping field in the table. You can set the session variable `enable_insert_strict` to `false` to allow the transaction to continue by filtering out the rows that mismatch the table.
-- If you execute the INSERT statement frequently to load small batches of data into StarRocks, excessive data versions are generated. It severely affects query performance. We recommend that, in production, you should not load data with the INSERT command too often or use it as a routine for data loading on a daily basis. If your application or analytic scenario demand solutions to loading streaming data or small data batches separately, we recommend you use Apache Kafka® as your data source and load the data via [Routine Load](../loading/RoutineLoad.md).
+- If you execute the INSERT statement frequently to load small batches of data into StarRocks, excessive data versions are generated. It severely affects query performance. We recommend that, in production, you should not load data with the INSERT command too often or use it as a routine for data loading on a daily basis. If your application or analytic scenario demand solutions to loading streaming data or small data batches separately, we recommend you use Apache Kafka® as your data source and load the data via Routine Load.
- If you execute the INSERT OVERWRITE statement, StarRocks creates temporary partitions for the partitions which store the original data, inserts new data into the temporary partitions, and [swaps the original partitions with the temporary partitions](../sql-reference/sql-statements/table_bucket_part_index/ALTER_TABLE.md#use-a-temporary-partition-to-replace-current-partition). All these operations are executed in the FE Leader node. Hence, if the FE Leader node crashes while executing INSERT OVERWRITE command, the whole load transaction will fail, and the temporary partitions will be truncated.

## Preparation
@@ -111,7 +111,7 @@ DISTRIBUTED BY HASH(user);

> **NOTICE**
>
-> Since v2.5.7, StarRocks can automatically set the number of buckets (BUCKETS) when you create a table or add a partition. You no longer need to manually set the number of buckets. For detailed information, see [determine the number of buckets](../table_design/Data_distribution.md#determine-the-number-of-buckets).
+> Since v2.5.7, StarRocks can automatically set the number of buckets (BUCKETS) when you create a table or add a partition. You no longer need to manually set the number of buckets. For detailed information, see [determine the number of buckets](../table_design/data_distribution/Data_distribution.md#determine-the-number-of-buckets).
## Insert data via INSERT INTO VALUES

