Merge pull request #3539 from szarnyasg/quote-chars
Replace straight quotation marks characters with proper double quotes
szarnyasg authored Sep 5, 2024
2 parents 593ea69 + a8aab50 commit 7bf89cf
Showing 13 changed files with 18 additions and 17 deletions.
1 change: 1 addition & 0 deletions CONTRIBUTING.md
@@ -73,6 +73,7 @@ Some of this style guide is automated with GitHub Actions, but feel free to run
* Narrow tables – that do not span horizontally across the entire page – should be prepended with an empty div that has the `narrow_table` class: `<div class="narrow_table"></div>`.
* Do not introduce hard line breaks if possible. Therefore, avoid using the `<br/>` HTML tag and avoid [double spaces at the end of a line in Markdown](https://spec.commonmark.org/0.28/#hard-line-breaks).
* Single and double quote characters (`'` and `"`) are not converted to smart quotation marks automatically. To insert these, use `“` and `”`.
* When referencing other articles, put their titles in quotes, e.g., `see the [“Lightweight Compression in DuckDB” blog post]({% post_url 2022-10-28-lightweight-compression %})`.
* For unordered lists, use `*`. If the list has multiple levels, use **4 spaces** for indentation.
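The quote rule above can be checked mechanically. A hypothetical lint sketch (the function name and regex are the editor's illustration, not part of the repository's CI):

```python
import re

def find_straight_quotes(line: str) -> list[str]:
    """Return straight quote characters in Markdown prose,
    ignoring inline code spans, where straight quotes are intentional."""
    prose = re.sub(r"`[^`]*`", "", line)  # drop inline code spans
    return re.findall(r'["\']', prose)

# A quote inside a code span is fine; one in prose is flagged
assert find_straight_quotes('use `"` in code') == []
assert find_straight_quotes('a "quoted" word') == ['"', '"']
```
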

> [!TIP]
2 changes: 1 addition & 1 deletion docs/api/python/data_ingestion.md
@@ -143,7 +143,7 @@ print(duckdb.sql("SELECT * FROM test_df").fetchall())
[(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]
```

DuckDB also supports "registering" a DataFrame or Arrow object as a virtual table, comparable to a SQL `VIEW`. This is useful when querying a DataFrame/Arrow object that is stored in another way (as a class variable, or a value in a dictionary). Below is a Pandas example:
DuckDB also supports “registering” a DataFrame or Arrow object as a virtual table, comparable to a SQL `VIEW`. This is useful when querying a DataFrame/Arrow object that is stored in another way (as a class variable, or a value in a dictionary). Below is a Pandas example:

If your Pandas DataFrame is stored in another location, here is an example of manually registering it:

2 changes: 1 addition & 1 deletion docs/configuration/pragmas.md
@@ -118,7 +118,7 @@ This call returns the following information for the given table:
| `segment_type` | `VARCHAR` ||
| `start` | `BIGINT` | The start row id of this chunk |
| `count` | `BIGINT` | The amount of entries in this storage chunk |
| `compression` | `VARCHAR` | Compression type used for this column – see [blog post]({% post_url 2022-10-28-lightweight-compression %}) |
| `compression` | `VARCHAR` | Compression type used for this column – see the [“Lightweight Compression in DuckDB” blog post]({% post_url 2022-10-28-lightweight-compression %}) |
| `stats` | `VARCHAR` ||
| `has_updates` | `BOOLEAN` ||
| `persistent` | `BOOLEAN` | `false` if temporary table |
2 changes: 1 addition & 1 deletion docs/dev/building/building_extensions.md
@@ -84,7 +84,7 @@ With this flag enabled, when the assertion triggers it will instead directly cau

#### `DISABLE_STRING_INLINE`

In our execution format `string_t` has the feature to "inline" strings that are under a certain length (12 bytes), this means they don't require a separate allocation.
In our execution format `string_t` has the feature to “inline” strings that are under a certain length (12 bytes), this means they don't require a separate allocation.
When this flag is set, we disable this and don't inline small strings.
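The 12-byte threshold can be illustrated with a plain-Python sketch (this is not the DuckDB `string_t` implementation, only the length rule it describes):

```python
# Strings of at most 12 bytes fit in the fixed-size string struct itself;
# longer ones carry a pointer to a separate allocation.
INLINE_LIMIT = 12

def is_inlined(s: str) -> bool:
    return len(s.encode("utf-8")) <= INLINE_LIMIT

assert is_inlined("hello")               # 5 bytes: stored inline
assert not is_inlined("hello, world!")   # 13 bytes: separate allocation
```
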

#### `DISABLE_MEMORY_SAFETY`
2 changes: 1 addition & 1 deletion docs/extensions/json.md
@@ -343,7 +343,7 @@ You can read the same file with `records` set to `'false'`, to get a single colu
| {'duck': 42, 'goose': [1,2,3]} |
| {'duck': 43, 'goose': [4,5,6]} |

For additional examples reading more complex data, please see the [Shredding Deeply Nested JSON, One Vector at a Time blog post]({% post_url 2023-03-03-json %}).
For additional examples reading more complex data, please see the [“Shredding Deeply Nested JSON, One Vector at a Time” blog post]({% post_url 2023-03-03-json %}).

## JSON Import/Export

2 changes: 1 addition & 1 deletion docs/guides/database_integration/postgres.md
@@ -26,7 +26,7 @@ LOAD postgres;
After the `postgres` extension is installed, tables can be queried from PostgreSQL using the `postgres_scan` function:

```sql
-- scan the table "mytable" from the schema "public" in the database "mydb"
-- Scan the table "mytable" from the schema "public" in the database "mydb"
SELECT * FROM postgres_scan('host=localhost port=5432 dbname=mydb', 'public', 'mytable');
```

2 changes: 1 addition & 1 deletion docs/guides/file_formats/query_parquet.md
@@ -13,4 +13,4 @@ SELECT * FROM read_parquet('input.parquet');

The Parquet file will be processed in parallel. Filters will be automatically pushed down into the Parquet scan, and only the relevant columns will be read automatically.

For more information see the blog post ["Querying Parquet with Precision using DuckDB"](/2021/06/25/querying-parquet).
For more information see the blog post [“Querying Parquet with Precision using DuckDB”](/2021/06/25/querying-parquet).
2 changes: 1 addition & 1 deletion docs/guides/overview.md
@@ -10,7 +10,7 @@ The guides section contains compact how-to guides that are focused on achieving

Note that there are many tools using DuckDB, which are not covered in the official guides. To find a list of these tools, check out the [Awesome DuckDB repository](https://github.com/davidgasquez/awesome-duckdb).

> Tip For a short introductory tutorial, check out the [Analyzing Railway Traffic in the Netherlands]({% post_url 2024-05-31-analyzing-railway-traffic-in-the-netherlands %}) tutorial.
> Tip For a short introductory tutorial, check out the [“Analyzing Railway Traffic in the Netherlands”]({% post_url 2024-05-31-analyzing-railway-traffic-in-the-netherlands %}) tutorial.
## Data Import and Export

2 changes: 1 addition & 1 deletion docs/sql/data_types/numeric.md
@@ -61,7 +61,7 @@ For more complex mathematical operations, however, floating-point arithmetic is
In general, we advise that:

* If you require exact storage of numbers with a known number of decimal digits and require exact additions, subtractions, and multiplications (such as for monetary amounts), use the [`DECIMAL` data type](#fixed-point-decimals) or its `NUMERIC` alias instead.
* If you want to do fast or complicated calculations, the floating-point data types may be more appropriate. However, if you use the results for anything important, you should evaluate your implementation carefully for corner cases (ranges, infinities, underflows, invalid operations) that may be handled differently from what you expect and you should familiarize yourself with common floating-point pitfalls. The article ["What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) and [the floating point series on Bruce Dawson's blog](https://randomascii.wordpress.com/2017/06/19/sometimes-floating-point-math-is-perfect/) provide excellent starting points.
* If you want to do fast or complicated calculations, the floating-point data types may be more appropriate. However, if you use the results for anything important, you should evaluate your implementation carefully for corner cases (ranges, infinities, underflows, invalid operations) that may be handled differently from what you expect and you should familiarize yourself with common floating-point pitfalls. The article [“What Every Computer Scientist Should Know About Floating-Point Arithmetic” by David Goldberg](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) and [the floating point series on Bruce Dawson's blog](https://randomascii.wordpress.com/2017/06/19/sometimes-floating-point-math-is-perfect/) provide excellent starting points.

On most platforms, the `FLOAT` type has a range of at least 1E-37 to 1E+37 with a precision of at least 6 decimal digits. The `DOUBLE` type typically has a range of around 1E-307 to 1E+308 with a precision of at least 15 digits. Positive numbers outside of these ranges (and negative numbers outside the mirrored ranges) may cause errors on some platforms but will usually be converted to zero or infinity, respectively.
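The pitfall the advice above warns about can be seen directly in plain Python, mirroring the exact-`DECIMAL` versus floating-point distinction:

```python
from decimal import Decimal

# Classic binary floating-point pitfall: 0.1 has no exact binary representation
assert 0.1 + 0.2 != 0.3
print(0.1 + 0.2)  # 0.30000000000000004

# Fixed-point decimal arithmetic adds exactly, as a DECIMAL type does for money
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```
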

6 changes: 3 additions & 3 deletions docs/sql/dialect/friendly_sql.md
@@ -94,6 +94,6 @@ SELECT

## Related Blog Posts

* [Friendlier SQL with DuckDB]({% post_url 2022-05-04-friendlier-sql %}) blog post
* [Even Friendlier SQL with DuckDB]({% post_url 2023-08-23-even-friendlier-sql %}) blog post
* [SQL Gymnastics: Bending SQL into flexible new shapes]({% post_url 2024-03-01-sql-gymnastics %}) blog post
* [“Friendlier SQL with DuckDB”]({% post_url 2022-05-04-friendlier-sql %}) blog post
* [“Even Friendlier SQL with DuckDB”]({% post_url 2023-08-23-even-friendlier-sql %}) blog post
* [“SQL Gymnastics: Bending SQL into Flexible New Shapes”]({% post_url 2024-03-01-sql-gymnastics %}) blog post
2 changes: 1 addition & 1 deletion docs/sql/functions/aggregates.md
@@ -603,7 +603,7 @@ They all ignore `NULL` values (in the case of a single input column `x`), or pai

## Ordered Set Aggregate Functions

The table below shows the available "ordered set" aggregate functions.
The table below shows the available “ordered set” aggregate functions.
These functions are specified using the `WITHIN GROUP (ORDER BY sort_expression)` syntax,
and they are converted to an equivalent aggregate function that takes the ordering expression
as the first argument.
8 changes: 4 additions & 4 deletions docs/sql/functions/timestamptz.md
@@ -411,8 +411,8 @@ Often the same functionality can be implemented more reliably using the `struct`
| [`current_localtimestamp()`](#current_localtimestamp) | Returns a `TIMESTAMP` whose GMT bin values correspond to local date and time in the current time zone. |
| [`localtime`](#localtime) | Synonym for the `current_localtime()` function call. |
| [`localtimestamp`](#localtimestamp) | Synonym for the `current_localtimestamp()` function call. |
| [`timezone(text, timestamp)`](#timezonetext-timestamp) | Use the [date parts]({% link docs/sql/functions/datepart.md %}) of the timestamp in GMT to construct a timestamp in the given time zone. Effectively, the argument is a "local" time. |
| [`timezone(text, timestamptz)`](#timezonetext-timestamptz) | Use the [date parts]({% link docs/sql/functions/datepart.md %}) of the timestamp in the given time zone to construct a timestamp. Effectively, the result is a "local" time. |
| [`timezone(text, timestamp)`](#timezonetext-timestamp) | Use the [date parts]({% link docs/sql/functions/datepart.md %}) of the timestamp in GMT to construct a timestamp in the given time zone. Effectively, the argument is a “local” time. |
| [`timezone(text, timestamptz)`](#timezonetext-timestamptz) | Use the [date parts]({% link docs/sql/functions/datepart.md %}) of the timestamp in the given time zone to construct a timestamp. Effectively, the result is a “local” time. |

#### `current_localtime()`

@@ -450,15 +450,15 @@ Often the same functionality can be implemented more reliably using the `struct`

<div class="nostroke_table"></div>

| **Description** | Use the [date parts]({% link docs/sql/functions/datepart.md %}) of the timestamp in GMT to construct a timestamp in the given time zone. Effectively, the argument is a "local" time. |
| **Description** | Use the [date parts]({% link docs/sql/functions/datepart.md %}) of the timestamp in GMT to construct a timestamp in the given time zone. Effectively, the argument is a “local” time. |
| **Example** | `timezone('America/Denver', TIMESTAMP '2001-02-16 20:38:40')` |
| **Result** | `2001-02-16 19:38:40-08` |

#### `timezone(text, timestamptz)`

<div class="nostroke_table"></div>

| **Description** | Use the [date parts]({% link docs/sql/functions/datepart.md %}) of the timestamp in the given time zone to construct a timestamp. Effectively, the result is a "local" time. |
| **Description** | Use the [date parts]({% link docs/sql/functions/datepart.md %}) of the timestamp in the given time zone to construct a timestamp. Effectively, the result is a “local” time. |
| **Example** | `timezone('America/Denver', TIMESTAMPTZ '2001-02-16 20:38:40-05')` |
| **Result** | `2001-02-16 18:38:40` |

2 changes: 1 addition & 1 deletion docs/sql/query_syntax/groupby.md
@@ -14,7 +14,7 @@ The values of the grouping columns themselves are unchanged, and any other columns

Use `GROUP BY ALL` to `GROUP BY` all columns in the `SELECT` statement that are not wrapped in aggregate functions.
This simplifies the syntax by allowing the columns list to be maintained in a single location, and prevents bugs by keeping the `SELECT` granularity aligned to the `GROUP BY` granularity (Ex: Prevents any duplication).
See examples below and additional examples in the [Friendlier SQL with DuckDB blog post]({% post_url 2022-05-04-friendlier-sql %}#group-by-all).
See examples below and additional examples in the [“Friendlier SQL with DuckDB” blog post]({% post_url 2022-05-04-friendlier-sql %}#group-by-all).

## Multiple Dimensions

