Merge pull request duckdb#3371 from deining/fix-typos
Fix typos
szarnyasg authored Aug 12, 2024
2 parents 24795ff + 2b50537 commit 3fd58f3
Showing 25 changed files with 33 additions and 33 deletions.
2 changes: 1 addition & 1 deletion _posts/2022-01-06-time-zones.md
@@ -222,7 +222,7 @@ SELECT era('2019-05-01 00:00:00+10'::TIMESTAMPTZ), era('2019-05-01 00:00:00+09':
### Caveats

ICU has some differences in behaviour and representation from the DuckDB implementation. These are hopefully minor issues that should only be of concern to serious time nerds.
- * ICU represents instants as millisecond counts using a `double`. This makes it lose accuracy far from the epoch (e.g., around the first millenium)
+ * ICU represents instants as millisecond counts using a `double`. This makes it lose accuracy far from the epoch (e.g., around the first millennium)
* ICU uses the Julian calendar for dates before the Gregorian change on `1582-10-15` instead of the proleptic Gregorian calendar. This means that dates prior to the changeover will differ, although ICU will give the date as actually written at the time.
* ICU computes ages by using part increments instead of using the length of the earlier month like DuckDB and Postgres.

2 changes: 1 addition & 1 deletion docs/api/cli/dot_commands.md
@@ -62,7 +62,7 @@ Dot commands are available in the DuckDB CLI client. To use one of these command
| `.timer on|off` | Turn SQL timer on or off |
| `.width NUM1 NUM2 ...` | Set minimum column widths for columnar output |

- ## Using the `.help` Commmand
+ ## Using the `.help` Command

The `.help` text may be filtered by passing in a text string as the second argument.

2 changes: 1 addition & 1 deletion docs/api/python/conversion.md
@@ -25,7 +25,7 @@ The rest of the conversion rules are as follows.
### `int`

Since integers can be of arbitrary size in Python, there is not a one-to-one conversion possible for ints.
- Intead we perform these casts in order until one succeeds:
+ Instead, we perform these casts in order until one succeeds:

* `BIGINT`
* `INTEGER`
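
A minimal sketch of this fallback from Python, using the DB-API `description` attribute to see which type was picked (exactly which wider type catches the oversized value is left open here):

```python
import duckdb

con = duckdb.connect()

# A small Python int binds as one of the narrower integer types.
con.execute("SELECT ? AS v", [42])
print(con.description)

# 2**70 does not fit in a BIGINT, so a type further down the cast
# order is used instead.
con.execute("SELECT ? AS v", [2 ** 70])
print(con.description)
```
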
2 changes: 1 addition & 1 deletion docs/api/python/data_ingestion.md
@@ -3,7 +3,7 @@ layout: docu
title: Data Ingestion
---

- This page containes examples for data ingestion to Python using DuckDB. First, import the DuckDB page:
+ This page contains examples for data ingestion to Python using DuckDB. First, import the DuckDB package:

```python
import duckdb
```
2 changes: 1 addition & 1 deletion docs/api/python/expression.md
@@ -168,5 +168,5 @@ When expressions are provided to `DuckDBPyRelation.order()`, the following order
|--------------------------------|----------------------------------------------------------------------------------------------------------------|
| `.asc()` | Indicates that this expression should be sorted in ascending order. |
| `.desc()` | Indicates that this expression should be sorted in descending order. |
- | `.nulls_first()` | Indicates that the nulls in this expression should preceed the non-null values. |
+ | `.nulls_first()` | Indicates that the nulls in this expression should precede the non-null values. |
| `.nulls_last()` | Indicates that the nulls in this expression should come after the non-null values. |
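
A short sketch of these modifiers, assuming `order()` accepts `Expression` objects as described above (the column name and sample values are made up):

```python
import duckdb
from duckdb import ColumnExpression

rel = duckdb.sql("SELECT * FROM (VALUES (1), (NULL), (2)) t(x)")

# Sort descending, with NULLs placed before the non-null values.
rel.order(ColumnExpression("x").desc().nulls_first()).show()
```
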
2 changes: 1 addition & 1 deletion docs/api/wasm/extensions.md
@@ -47,7 +47,7 @@ WebAssembly is basically an additional platform, and there might be platform-spe

The HTTPFS extension is, at the moment, not available in DuckDB-Wasm. HTTPS protocol capabilities need to go through an additional layer, the browser, which adds both differences and some restrictions relative to what is doable natively.

- Instead, DuckDB-Wasm has a separate implementation that for most purposes is interchangable, but does not support all use cases (as it must follow security rules imposed by the browser, such as CORS).
+ Instead, DuckDB-Wasm has a separate implementation that for most purposes is interchangeable, but does not support all use cases (as it must follow security rules imposed by the browser, such as CORS).
Due to this CORS restriction, any requests for data made using the HTTPFS extension must be to websites that allow (using CORS headers) the website hosting the DuckDB-Wasm instance to access that data.
The [MDN website](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is a great resource for more information regarding CORS.

6 changes: 3 additions & 3 deletions docs/data/csv/reading_faulty_csv_files.md
@@ -176,9 +176,9 @@ The CSV Reject Errors Table returns the following information:
|:--|:-----|:-|
| `scan_id` | The internal ID used in DuckDB to represent that scanner, used to join with reject scans tables | `UBIGINT` |
| `file_id` | The file_id represents a unique file in a scanner, used to join with reject scans tables | `UBIGINT` |
- | `line` | Line number, from the CSV File, where the error occured. | `UBIGINT` |
- | `line_byte_position` | Byte Position of the start of the line, where the error occured. | `UBIGINT` |
- | `byte_position` | Byte Position where the error occured. | `UBIGINT` |
+ | `line` | Line number, from the CSV file, where the error occurred. | `UBIGINT` |
+ | `line_byte_position` | Byte position of the start of the line, where the error occurred. | `UBIGINT` |
+ | `byte_position` | Byte position where the error occurred. | `UBIGINT` |
| `column_idx` | If the error happens in a specific column, the index of the column. | `UBIGINT` |
| `column_name` | If the error happens in a specific column, the name of the column. | `VARCHAR` |
| `error_type` | The type of the error that happened. | `ENUM` |
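
For example, a faulty file can be scanned with rejects stored and the recorded errors inspected afterwards (a sketch; `faulty.csv` is a placeholder file):

```python
import duckdb

con = duckdb.connect()

# Scan a CSV containing malformed rows; errors are recorded
# instead of aborting the scan.
con.sql("FROM read_csv('faulty.csv', store_rejects = true)").show()

# Inspect the errors recorded for that scan.
con.sql("FROM reject_errors").show()
```
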
2 changes: 1 addition & 1 deletion docs/dev/repositories.md
@@ -28,7 +28,7 @@ Several components of DuckDB are maintained in separate repositories.

* [`dbt-duckdb`](https://github.com/duckdb/dbt-duckdb): dbt
* [`duckdb_mysql`](https://github.com/duckdb/duckdb_mysql): MySQL connector
- * [`postgres_scanner`](https://github.com/duckdb/postgres_scanner): PostgresSQL connector
+ * [`postgres_scanner`](https://github.com/duckdb/postgres_scanner): PostgreSQL connector
* [`sqlite_scanner`](https://github.com/duckdb/sqlite_scanner): SQLite connector

## Extensions
2 changes: 1 addition & 1 deletion docs/extensions/aws.md
@@ -18,7 +18,7 @@ LOAD aws;

## Related Extensions

- `aws` depends on `httpfs` extension capablities, and both will be autoloaded on the first call to `load_aws_credentials`.
+ `aws` depends on `httpfs` extension capabilities, and both will be autoloaded on the first call to `load_aws_credentials`.
If autoinstall or autoload are disabled, you can always explicitly install and load them as follows:

```sql
INSTALL aws;
INSTALL httpfs;
LOAD aws;
LOAD httpfs;
```
2 changes: 1 addition & 1 deletion docs/extensions/overview.md
@@ -88,7 +88,7 @@ FROM 'https://raw.githubusercontent.com/duckdb/duckdb-web/main/data/weather.csv'

DuckDB will automatically install and load the [`httpfs`]({% link docs/extensions/httpfs/overview.md %}) extension. No explicit `INSTALL` or `LOAD` statements are required.
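
A sketch of what that looks like from Python, reusing the sample URL from the query above:

```python
import duckdb

# No INSTALL or LOAD statements: querying an https:// URL triggers
# autoinstall and autoload of httpfs on first use.
duckdb.sql(
    "FROM 'https://raw.githubusercontent.com/duckdb/duckdb-web/main/data/weather.csv'"
).show()
```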

- Not all extensions can be autoloaded. This can have various reasons: some extensions make several changes to the running DuckDB instance, making autoloading technically not (yet) possible. For others, it is prefered to have users opt-in to the extension explicitly before use due to the way they modify behaviour in DuckDB.
+ Not all extensions can be autoloaded. This can have various reasons: some extensions make several changes to the running DuckDB instance, making autoloading technically not (yet) possible. For others, it is preferred to have users opt in to the extension explicitly before use due to the way they modify behaviour in DuckDB.

To see which extensions can be autoloaded, check the [core extensions list]({% link docs/extensions/core_extensions.md %}).

2 changes: 1 addition & 1 deletion docs/extensions/spatial.md
@@ -179,7 +179,7 @@ ST_Read(
* `allowed_drivers` (default: `[]`): A list of GDAL driver names that are allowed to be used to open the file. If empty, all drivers are allowed.
* `sibling_files` (default: `[]`): A list of sibling files that are required to open the file. E.g., the `ESRI Shapefile` driver requires a `.shx` file to be present, although most of the time these can be discovered automatically.
* `spatial_filter_box` (default: `NULL`): If set to a `BOX_2D`, the table function will only return rows that intersect with the given bounding box. Similar to `spatial_filter`.
- * `keep_wkb` (default: `false`): If set, the table function will return geometries in a `wkb_geometry` column with the type `WKB_BLOB` (which can be cast to `BLOB`) instead of `GEOMETRY`. This is useful if you want to use DuckDB with more exotic geometry subtypes that DuckDB spatial doesnt support representing in the `GEOMETRY` type yet.
+ * `keep_wkb` (default: `false`): If set, the table function will return geometries in a `wkb_geometry` column with the type `WKB_BLOB` (which can be cast to `BLOB`) instead of `GEOMETRY`. This is useful if you want to use DuckDB with more exotic geometry subtypes that DuckDB spatial doesn't support representing in the `GEOMETRY` type yet.

Note that GDAL is single-threaded, so this table function will not be able to make full use of parallelism. We're planning to implement support for the most common vector formats natively in this extension with additional table functions in the future.
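
A brief sketch of the `keep_wkb` option described in the list above (`test.shp` is a placeholder path):

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL spatial")
con.execute("LOAD spatial")

# Return raw WKB blobs instead of GEOMETRY values, e.g., for
# geometry subtypes that GEOMETRY cannot represent yet.
con.sql("FROM ST_Read('test.shp', keep_wkb = true)").show()
```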

4 changes: 2 additions & 2 deletions docs/extensions/vss.md
@@ -72,7 +72,7 @@ The following table shows the supported distance metrics and their corresponding
| `cosine` | `array_cosine_similarity` | Cosine similarity |
| `ip` | `array_inner_product` | Inner product |

- Note that while each `HNSW` index only applies to a single column you can create multiple `HNSW` indexes on the same table each individually indexing a different column. Additionally, you can also create mulitple `HNSW` indexes to the same column, each supporting a different distance metric.
+ Note that while each `HNSW` index only applies to a single column, you can create multiple `HNSW` indexes on the same table, each individually indexing a different column. Additionally, you can also create multiple `HNSW` indexes on the same column, each supporting a different distance metric.
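
A sketch of that setup, with two indexes on one column, one per metric (table and data are made up):

```python
import duckdb

con = duckdb.connect()  # in-memory, so no persistence flag is needed
con.execute("INSTALL vss")
con.execute("LOAD vss")

con.execute("CREATE TABLE items (vec FLOAT[3])")
con.execute("INSERT INTO items VALUES ([1, 2, 3]), ([2, 3, 4])")

# Two HNSW indexes on the same column, each with its own metric.
con.execute("CREATE INDEX idx_l2 ON items USING HNSW (vec)")
con.execute("CREATE INDEX idx_cos ON items USING HNSW (vec) WITH (metric = 'cosine')")
```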

## Index options

@@ -91,7 +91,7 @@ Additionally, you can also override the `ef_search` parameter set at index const

Due to some known issues related to persistence of custom extension indexes, the `HNSW` index can only be created on tables in in-memory databases by default, unless the `SET hnsw_enable_experimental_persistence = ⟨bool⟩` configuration option is set to `true`.

- The reasoning for locking this feature behind an experimental flag is that "WAL" recovery is not yet properly implemented for custom indexes, meaning that if a crash occurs or the database is shut down unexpectedly while there are uncommited changes to a `HNSW`-indexed table, you can end up with __data loss or corruption of the index__.
+ The reasoning for locking this feature behind an experimental flag is that "WAL" recovery is not yet properly implemented for custom indexes, meaning that if a crash occurs or the database is shut down unexpectedly while there are uncommitted changes to a `HNSW`-indexed table, you can end up with __data loss or corruption of the index__.

If you enable this option and experience an unexpected shutdown, you can try to recover the index by first starting DuckDB separately, loading the `vss` extension and then `ATTACH`ing the database file, which ensures that the `HNSW` index functionality is available during WAL-playback, allowing DuckDB's recovery process to proceed without issues. But we still recommend that you do not use this feature in production environments.
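
A sketch of that recovery sequence (`vectors.db` is a placeholder database file):

```python
import duckdb

con = duckdb.connect()   # start DuckDB separately (in-memory)
con.execute("LOAD vss")  # make the HNSW index type known first
con.execute("ATTACH 'vectors.db'")  # WAL playback can now proceed
```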

2 changes: 1 addition & 1 deletion docs/extensions/working_with_extensions.md
@@ -216,7 +216,7 @@ For example, if the file was available at the (relative) path `path/to/httpfs.du
```sql
LOAD 'path/to/httpfs.duckdb_extension';
```

- This will skip any currently installed file in the specifed path.
+ This will skip any currently installed file in the specified path.

Using remote paths for compressed files is currently not possible.

2 changes: 1 addition & 1 deletion docs/guides/file_formats/read_file.md
@@ -62,4 +62,4 @@ In cases where the underlying filesystem is unable to provide some of this data

## Support for Projection Pushdown

- The table functions also utilize projection pushdown to avoid computing properties unnecessarily. So you could e.g., use this to glob a directory full of huge files to get the file size in the size column, as long as you omit the content column the data wont be read into DuckDB.
+ The table functions also utilize projection pushdown to avoid computing properties unnecessarily. So you could, e.g., use this to glob a directory full of huge files to get the file size in the `size` column; as long as you omit the `content` column, the data won't be read into DuckDB.
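
For example (a sketch; the glob is a placeholder, and the columns are those of the `read_blob` table function documented on this page):

```python
import duckdb

# Only filename and size are projected, so the file contents are
# never read into DuckDB.
duckdb.sql("SELECT filename, size FROM read_blob('data/*')").show()
```
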
4 changes: 2 additions & 2 deletions docs/guides/glossary.md
@@ -9,11 +9,11 @@ This page contains a glossary of a few common terms used in DuckDB.

### In-Process Database Management System

- The DBMS runs in the client application's process instead of running as a separate process, which is common in the traditional client–server setup. An alterative term is **embeddable** database management system. In general, the term _"embedded database management system"_ should be avoided, as it can be confused with DBMSs targeting _embedded systems_ (which run on e.g. microcontrollers).
+ The DBMS runs in the client application's process instead of running as a separate process, which is common in the traditional client–server setup. An alternative term is **embeddable** database management system. In general, the term _"embedded database management system"_ should be avoided, as it can be confused with DBMSs targeting _embedded systems_ (which run on e.g. microcontrollers).

### Replacement Scan

- In DuckDB, replacement scans are used when a table name used by a query does not exist in the catalog. These scans can substitute another data source intead of the table. Using replacement scans allows DuckDB to, e.g., seamlessly read [Pandas DataFrames]({% link docs/guides/python/sql_on_pandas.md %}) or read input data from remote sources without explicitly invoking the functions that perform this (e.g., [reading Parquet files from https]({% link docs/guides/network_cloud_storage/http_import.md %})). For details, see the [C API - Replacement Scans page]({% link docs/api/c/replacement_scans.md %}).
+ In DuckDB, replacement scans are used when a table name used by a query does not exist in the catalog. These scans can substitute another data source instead of the table. Using replacement scans allows DuckDB to, e.g., seamlessly read [Pandas DataFrames]({% link docs/guides/python/sql_on_pandas.md %}) or read input data from remote sources without explicitly invoking the functions that perform this (e.g., [reading Parquet files from https]({% link docs/guides/network_cloud_storage/http_import.md %})). For details, see the [C API - Replacement Scans page]({% link docs/api/c/replacement_scans.md %}).
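
For example, a Pandas DataFrame that is in scope can be queried by name (a minimal sketch):

```python
import duckdb
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3]})

# "df" is not a catalog table, so a replacement scan substitutes
# the in-scope DataFrame as the data source.
duckdb.sql("SELECT sum(x) AS total FROM df").show()
```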

### Extension

2 changes: 1 addition & 1 deletion docs/guides/performance/my_workload_is_slow.md
@@ -13,7 +13,7 @@ If you find that your workload in DuckDB is slow, we recommend performing the fo
1. Are you using the correct types? For example, [use `TIMESTAMP` to encode datetime values]({% link docs/guides/performance/schema.md %}#types).
1. Are you reading from Parquet files? If so, do they have [row group sizes between 100k and 1M]({% link docs/guides/performance/file_formats.md %}#the-effect-of-row-group-sizes) and file sizes between 100 MB and 10 GB?
1. Does the query plan look right? Study it with [`EXPLAIN`]({% link docs/guides/performance/how_to_tune_workloads.md %}#profiling) (see the sketch after this list).
- 1. Is the workload running [in parallel]({% link docs/guides/performance/how_to_tune_workloads.md %}#paralellism)? Use `htop` or the operating system's task manager to observe this.
+ 1. Is the workload running [in parallel]({% link docs/guides/performance/how_to_tune_workloads.md %}#parallelism)? Use `htop` or the operating system's task manager to observe this.
1. Is DuckDB using too many threads? Try [limiting the amount of threads]({% link docs/guides/performance/how_to_tune_workloads.md %}#parallelism-multi-core-processing).
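
A minimal sketch for the `EXPLAIN` step above:

```python
import duckdb

# Print the physical plan before tuning anything else.
duckdb.sql("EXPLAIN SELECT 42 AS x").show()
```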

Are you aware of other common issues? If so, please click the _Report content issue_ link below and describe them along with their workarounds.
@@ -11,14 +11,14 @@ DuckDB creates the following global files and directories in the user's home dir

| Location | Description | Shared between versions | Shared between clients |
|-------|-------------------|--|--|
- | `~/.duckdbrc` | The content of this file is executed when starting the [DuckDB CLI client]({% link docs/api/cli/overview.md %}). The commands can be both [dot commmand]({% link docs/api/cli/dot_commands.md %}) and SQL statements. The naming of this file follows the `~/.bashrc` and `~/.zshrc` "run commands" files. | Yes | Only used by CLI |
+ | `~/.duckdbrc` | The content of this file is executed when starting the [DuckDB CLI client]({% link docs/api/cli/overview.md %}). The commands can be both [dot commands]({% link docs/api/cli/dot_commands.md %}) and SQL statements. The naming of this file follows the `~/.bashrc` and `~/.zshrc` "run commands" files. | Yes | Only used by CLI |
| `~/.duckdb_history` | History file, similar to `~/.bash_history` and `~/.zsh_history`. Used by the [DuckDB CLI client]({% link docs/api/cli/overview.md %}). | Yes | Only used by CLI |
| `~/.duckdb/extensions` | Binaries of installed [extensions]({% link docs/extensions/overview.md %}). | No | Yes |
| `~/.duckdb/stored_secrets` | [Persistent secrets]({% link docs/configuration/secrets_manager.md %}#persistent-secrets) created by the [Secrets manager]({% link docs/configuration/secrets_manager.md %}). | Yes | Yes |

## Local Files and Directories

- DuckDB creates the following files and directories in the working directory (for in-memory connections) or relative to the database file (for pesistent connections):
+ DuckDB creates the following files and directories in the working directory (for in-memory connections) or relative to the database file (for persistent connections):

| Name | Description | Example |
|-------|-------------------|---|
4 changes: 2 additions & 2 deletions docs/sql/data_types/literal_types.md
@@ -59,7 +59,7 @@ Note that double quotes (`"`) cannot be used as string delimiter character: inst

### Implicit String Literal Concatenation

- Consecutive single-quoted string literals sepearated only by whitespace that contains at least one newline are implicitly concatenated:
+ Consecutive single-quoted string literals separated only by whitespace that contains at least one newline are implicitly concatenated:

```sql
SELECT 'Hello'
    ' '
    'World' AS greeting;
```
@@ -81,7 +81,7 @@ They both return the following result:
|-------------|
| Hello World |

- Note that implicit concatenation only works if there is at least one newline between the literals. Using adjacent string literals separated by whitspace without a newline results in a syntax error:
+ Note that implicit concatenation only works if there is at least one newline between the literals. Using adjacent string literals separated by whitespace without a newline results in a syntax error:

```sql
SELECT 'Hello' ' ' 'World' AS greeting;
```
2 changes: 1 addition & 1 deletion docs/sql/dialect/postgresql_compatibility.md
@@ -38,7 +38,7 @@ SELECT 'Infinity'::FLOAT - 1.0 AS x;

## Division on Integers

- When computing division on integers, PostgreSQL performs integer divison, while DuckDB performs float division:
+ When computing division on integers, PostgreSQL performs integer division, while DuckDB performs float division:

```sql
SELECT 1 / 2 AS x;
```
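
A quick way to see the difference from Python (the `//` integer division operator is assumed to be available here):

```python
import duckdb

# / performs float division in DuckDB; // (integer division)
# reproduces PostgreSQL's 1 / 2 = 0 behavior.
duckdb.sql("SELECT 1 / 2 AS x, 1 // 2 AS y").show()  # x = 0.5, y = 0
```
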
4 changes: 2 additions & 2 deletions docs/sql/functions/interval.md
@@ -35,7 +35,7 @@ The table below shows the available scalar functions for `INTERVAL` types.
| [`to_decades(integer)`](#to_decadesinteger) | Construct a decade interval. |
| [`to_hours(integer)`](#to_hoursinteger) | Construct an hour interval. |
| [`to_microseconds(integer)`](#to_microsecondsinteger) | Construct a microsecond interval. |
- | [`to_millennia(integer)`](#to_millenniainteger) | Construct a millenium interval. |
+ | [`to_millennia(integer)`](#to_millenniainteger) | Construct a millennium interval. |
| [`to_milliseconds(integer)`](#to_millisecondsinteger) | Construct a millisecond interval. |
| [`to_minutes(integer)`](#to_minutesinteger) | Construct a minute interval. |
| [`to_months(integer)`](#to_monthsinteger) | Construct a month interval. |
@@ -121,7 +121,7 @@ The table below shows the available scalar functions for `INTERVAL` types.

<div class="nostroke_table"></div>

- | **Description** | Construct a millenium interval. |
+ | **Description** | Construct a millennium interval. |
| **Example** | `to_millennia(5)` |
| **Result** | `INTERVAL 5000 YEAR` |
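
A quick check of the example above from Python (a sketch):

```python
import duckdb

# to_millennia(5) yields a 5000-year interval, as documented above.
duckdb.sql("SELECT to_millennia(5) AS m").show()
```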
