diff --git a/_posts/2022-01-06-time-zones.md b/_posts/2022-01-06-time-zones.md
index c8e1eba8e42..fd07e6aac6e 100644
--- a/_posts/2022-01-06-time-zones.md
+++ b/_posts/2022-01-06-time-zones.md
@@ -222,7 +222,7 @@ SELECT era('2019-05-01 00:00:00+10'::TIMESTAMPTZ), era('2019-05-01 00:00:00+09':
 ### Caveats
 
 ICU has some differences in behaviour and representation from the DuckDB implementation. These are hopefully minor issues that should only be of concern to serious time nerds.
 
-* ICU represents instants as millisecond counts using a `double`. This makes it lose accuracy far from the epoch (e.g., around the first millenium)
+* ICU represents instants as millisecond counts using a `double`. This makes it lose accuracy far from the epoch (e.g., around the first millennium)
 * ICU uses the Julian calendar for dates before the Gregorian change on `1582-10-15` instead of the proleptic Gregorian calendar. This means that dates prior to the changeover will differ, although ICU will give the date as actually written at the time.
 * ICU computes ages by using part increments instead of using the length of the earlier month like DuckDB and Postgres.
diff --git a/docs/api/cli/dot_commands.md b/docs/api/cli/dot_commands.md
index aa0a288e829..4b697ef2c03 100644
--- a/docs/api/cli/dot_commands.md
+++ b/docs/api/cli/dot_commands.md
@@ -62,7 +62,7 @@ Dot commands are available in the DuckDB CLI client. To use one of these command
 | `.timer on|off` | Turn SQL timer on or off |
 | `.width NUM1 NUM2 ...` | Set minimum column widths for columnar output |
 
-## Using the `.help` Commmand
+## Using the `.help` Command
 
 The `.help` text may be filtered by passing in a text string as the second argument.
diff --git a/docs/api/python/conversion.md b/docs/api/python/conversion.md
index d990cd6d76a..dc70cd0c0b0 100644
--- a/docs/api/python/conversion.md
+++ b/docs/api/python/conversion.md
@@ -25,7 +25,7 @@ The rest of the conversion rules are as follows.
 ### `int`
 
 Since integers can be of arbitrary size in Python, there is not a one-to-one conversion possible for ints.
-Intead we perform these casts in order until one succeeds:
+Instead we perform these casts in order until one succeeds:
 
 * `BIGINT`
 * `INTEGER`
diff --git a/docs/api/python/data_ingestion.md b/docs/api/python/data_ingestion.md
index 1682b65e041..35813f65a6d 100644
--- a/docs/api/python/data_ingestion.md
+++ b/docs/api/python/data_ingestion.md
@@ -3,7 +3,7 @@ layout: docu
 title: Data Ingestion
 ---
 
-This page containes examples for data ingestion to Python using DuckDB. First, import the DuckDB page:
+This page contains examples for data ingestion to Python using DuckDB. First, import the DuckDB package:
 
 ```python
 import duckdb
 ```
diff --git a/docs/api/python/expression.md b/docs/api/python/expression.md
index 303a8c4c207..913f15767ad 100644
--- a/docs/api/python/expression.md
+++ b/docs/api/python/expression.md
@@ -168,5 +168,5 @@ When expressions are provided to `DuckDBPyRelation.order()`, the following order
 |--------------------------------|----------------------------------------------------------------------------------------------------------------|
 | `.asc()` | Indicates that this expression should be sorted in ascending order. |
 | `.desc()` | Indicates that this expression should be sorted in descending order. |
-| `.nulls_first()` | Indicates that the nulls in this expression should preceed the non-null values. |
+| `.nulls_first()` | Indicates that the nulls in this expression should precede the non-null values. |
 | `.nulls_last()` | Indicates that the nulls in this expression should come after the non-null values. |
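For reference, these Python ordering modifiers map directly onto SQL's `NULLS FIRST` / `NULLS LAST` clauses. A minimal sketch (the table `t` and its values are hypothetical):

```sql
CREATE TABLE t (x INTEGER);
INSERT INTO t VALUES (42), (NULL), (7);

-- The NULL row is returned first; NULLS LAST would move it to the end.
SELECT x FROM t ORDER BY x ASC NULLS FIRST;
```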
diff --git a/docs/api/wasm/extensions.md b/docs/api/wasm/extensions.md
index 95e0a84018f..970134fe9cf 100644
--- a/docs/api/wasm/extensions.md
+++ b/docs/api/wasm/extensions.md
@@ -47,7 +47,7 @@ WebAssembly is basically an additional platform, and there might be platform-spe
 The HTTPFS extension is, at the moment, not available in DuckDB-Wasm.
 Https protocol capabilities needs to go through an additional layer, the browser, which adds both differences and some restrictions to what is doable from native.
 
-Instead, DuckDB-Wasm has a separate implementation that for most purposes is interchangable, but does not support all use cases (as it must follow security rules imposed by the browser, such as CORS).
+Instead, DuckDB-Wasm has a separate implementation that for most purposes is interchangeable, but does not support all use cases (as it must follow security rules imposed by the browser, such as CORS).
 
 Due to this CORS restriction, any requests for data made using the HTTPFS extension must be to websites that allow (using CORS headers) the website hosting the DuckDB-Wasm instance to access that data.
 The [MDN website](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is a great resource for more information regarding CORS.
diff --git a/docs/data/csv/reading_faulty_csv_files.md b/docs/data/csv/reading_faulty_csv_files.md
index 178b471a659..c231ac22913 100644
--- a/docs/data/csv/reading_faulty_csv_files.md
+++ b/docs/data/csv/reading_faulty_csv_files.md
@@ -176,9 +176,9 @@ The CSV Reject Errors Table returns the following information:
 |:--|:-----|:-|
 | `scan_id` | The internal ID used in DuckDB to represent that scanner, used to join with reject scans tables | `UBIGINT` |
 | `file_id` | The file_id represents a unique file in a scanner, used to join with reject scans tables | `UBIGINT` |
-| `line` | Line number, from the CSV File, where the error occured. | `UBIGINT` |
-| `line_byte_position` | Byte Position of the start of the line, where the error occured. | `UBIGINT` |
-| `byte_position` | Byte Position where the error occured. | `UBIGINT` |
+| `line` | Line number, from the CSV File, where the error occurred. | `UBIGINT` |
+| `line_byte_position` | Byte Position of the start of the line, where the error occurred. | `UBIGINT` |
+| `byte_position` | Byte Position where the error occurred. | `UBIGINT` |
 | `column_idx` | If the error happens in a specific column, the index of the column. | `UBIGINT` |
 | `column_name` | If the error happens in a specific column, the name of the column. | `VARCHAR` |
 | `error_type` | The type of the error that happened. | `ENUM` |
diff --git a/docs/dev/repositories.md b/docs/dev/repositories.md
index c3414452739..bdd919f3072 100644
--- a/docs/dev/repositories.md
+++ b/docs/dev/repositories.md
@@ -28,7 +28,7 @@ Several components of DuckDB are maintained in separate repositories.
 * [`dbt-duckdb`](https://github.com/duckdb/dbt-duckdb): dbt
 * [`duckdb_mysql`](https://github.com/duckdb/duckdb_mysql): MySQL connector
-* [`postgres_scanner`](https://github.com/duckdb/postgres_scanner): PostgresSQL connector
+* [`postgres_scanner`](https://github.com/duckdb/postgres_scanner): PostgreSQL connector
 * [`sqlite_scanner`](https://github.com/duckdb/sqlite_scanner): SQLite connector
 
 ## Extensions
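As a usage sketch for the PostgreSQL connector listed above (the connection string, schema, and table name are placeholders, not real objects):

```sql
INSTALL postgres;
LOAD postgres;

-- Attach a running PostgreSQL database and query one of its tables.
ATTACH 'host=localhost user=postgres dbname=mydb' AS pg (TYPE POSTGRES);
SELECT * FROM pg.public.my_table;
```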
diff --git a/docs/extensions/aws.md b/docs/extensions/aws.md
index ad1f9a87363..d383eebbe47 100644
--- a/docs/extensions/aws.md
+++ b/docs/extensions/aws.md
@@ -18,7 +18,7 @@ LOAD aws;
 
 ## Related Extensions
 
-`aws` depends on `httpfs` extension capablities, and both will be autoloaded on the first call to `load_aws_credentials`.
+`aws` depends on `httpfs` extension capabilities, and both will be autoloaded on the first call to `load_aws_credentials`.
 If autoinstall or autoload are disabled, you can always explicitly install and load them as follows:
 
 ```sql
diff --git a/docs/extensions/overview.md b/docs/extensions/overview.md
index ace0e9365b4..99dd7fe927d 100644
--- a/docs/extensions/overview.md
+++ b/docs/extensions/overview.md
@@ -88,7 +88,7 @@ FROM 'https://raw.githubusercontent.com/duckdb/duckdb-web/main/data/weather.csv'
 
 DuckDB will automatically install and load the [`httpfs`]({% link docs/extensions/httpfs/overview.md %}) extension. No explicit `INSTALL` or `LOAD` statements are required.
 
-Not all extensions can be autoloaded. This can have various reasons: some extensions make several changes to the running DuckDB instance, making autoloading technically not (yet) possible. For others, it is prefered to have users opt-in to the extension explicitly before use due to the way they modify behaviour in DuckDB.
+Not all extensions can be autoloaded. This can have various reasons: some extensions make several changes to the running DuckDB instance, making autoloading technically not (yet) possible. For others, it is preferred to have users opt in to the extension explicitly before use due to the way they modify behaviour in DuckDB.
 
 To see which extensions can be autoloaded, check the [core extensions list]({% link docs/extensions/core_extensions.md %}).
diff --git a/docs/extensions/spatial.md b/docs/extensions/spatial.md
index 11a842a9b43..fcd073104c7 100644
--- a/docs/extensions/spatial.md
+++ b/docs/extensions/spatial.md
@@ -179,7 +179,7 @@ ST_Read(
 * `allowed_drivers` (default: `[]`): A list of GDAL driver names that are allowed to be used to open the file. If empty, all drivers are allowed.
 * `sibling_files` (default: `[]`): A list of sibling files that are required to open the file. E.g., the `ESRI Shapefile` driver requires a `.shx` file to be present. Although most of the time these can be discovered automatically.
 * `spatial_filter_box` (default: `NULL`): If set to a `BOX_2D`, the table function will only return rows that intersect with the given bounding box. Similar to `spatial_filter`.
-* `keep_wkb` (default: `false`): If set, the table function will return geometries in a `wkb_geometry` column with the type `WKB_BLOB` (which can be cast to `BLOB`) instead of `GEOMETRY`. This is useful if you want to use DuckDB with more exotic geometry subtypes that DuckDB spatial doesnt support representing in the `GEOMETRY` type yet.
+* `keep_wkb` (default: `false`): If set, the table function will return geometries in a `wkb_geometry` column with the type `WKB_BLOB` (which can be cast to `BLOB`) instead of `GEOMETRY`. This is useful if you want to use DuckDB with more exotic geometry subtypes that DuckDB spatial doesn't support representing in the `GEOMETRY` type yet.
 
 Note that GDAL is single-threaded, so this table function will not be able to make full use of parallelism. We're planning to implement support for the most common vector formats natively in this extension with additional table functions in the future.
diff --git a/docs/extensions/vss.md b/docs/extensions/vss.md
index c303ac44606..78761aa6665 100644
--- a/docs/extensions/vss.md
+++ b/docs/extensions/vss.md
@@ -72,7 +72,7 @@ The following table shows the supported distance metrics and their corresponding
 | `cosine` | `array_cosine_similarity` | Cosine similarity |
 | `ip` | `array_inner_product` | Inner product |
 
-Note that while each `HNSW` index only applies to a single column you can create multiple `HNSW` indexes on the same table each individually indexing a different column. Additionally, you can also create mulitple `HNSW` indexes to the same column, each supporting a different distance metric.
+Note that while each `HNSW` index only applies to a single column, you can create multiple `HNSW` indexes on the same table, each individually indexing a different column. Additionally, you can also create multiple `HNSW` indexes on the same column, each supporting a different distance metric.
 
 ## Index options
 
@@ -91,7 +91,7 @@ Additionally, you can also override the `ef_search` parameter set at index const
 Due to some known issues related to peristence of custom extension indexes, the `HNSW` index can only be created on tables in in-memory databases by default, unless the `SET hnsw_enable_experimental_persistence = ⟨bool⟩` configuration option is set to `true`.
 
-The reasoning for locking this feature behind an experimental flag is that "WAL" recovery is not yet properly implemented for custom indexes, meaning that if a crash occurs or the database is shut down unexpectedly while there are uncommited changes to a `HNSW`-indexed table, you can end up with __data loss or corruption of the index__.
+The reasoning for locking this feature behind an experimental flag is that "WAL" recovery is not yet properly implemented for custom indexes, meaning that if a crash occurs or the database is shut down unexpectedly while there are uncommitted changes to a `HNSW`-indexed table, you can end up with __data loss or corruption of the index__.
 
 If you enable this option and experience an unexpected shutdown, you can try to recover the index by first starting DuckDB separately, loading the `vss` extension and then `ATTACH`ing the database file, which ensures that the `HNSW` index functionality is available during WAL-playback, allowing DuckDB's recovery process to proceed without issues.
 But we still recommend that you do not use this feature in production environments.
diff --git a/docs/extensions/working_with_extensions.md b/docs/extensions/working_with_extensions.md
index 4ff780e40e2..08720c57ca9 100644
--- a/docs/extensions/working_with_extensions.md
+++ b/docs/extensions/working_with_extensions.md
@@ -216,7 +216,7 @@ For example, if the file was available at the (relative) path `path/to/httpfs.du
 LOAD 'path/to/httpfs.duckdb_extension';
 ```
 
-This will skip any currently installed file in the specifed path.
+This will skip any currently installed file in the specified path.
 
 Using remote paths for compressed files is currently not possible.
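To make the `HNSW` wording in the `vss` hunk above concrete, here is a minimal sketch (the table and index names are hypothetical; on a persistent database this additionally requires the experimental persistence flag discussed above):

```sql
INSTALL vss;
LOAD vss;

CREATE TABLE embeddings (vec FLOAT[3]);

-- Two HNSW indexes on the same column, each with a different distance metric.
CREATE INDEX idx_l2 ON embeddings USING HNSW (vec);
CREATE INDEX idx_cos ON embeddings USING HNSW (vec) WITH (metric = 'cosine');
```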
diff --git a/docs/guides/file_formats/read_file.md b/docs/guides/file_formats/read_file.md
index b97294f3331..6520e46b6e0 100644
--- a/docs/guides/file_formats/read_file.md
+++ b/docs/guides/file_formats/read_file.md
@@ -62,4 +62,4 @@ In cases where the underlying filesystem is unable to provide some of this data
 
 ## Support for Projection Pushdown
 
-The table functions also utilize projection pushdown to avoid computing properties unnecessarily. So you could e.g., use this to glob a directory full of huge files to get the file size in the size column, as long as you omit the content column the data wont be read into DuckDB.
+The table functions also utilize projection pushdown to avoid computing properties unnecessarily. So you could, e.g., use this to glob a directory full of huge files to get the file size in the `size` column; as long as you omit the `content` column, the data won't be read into DuckDB.
diff --git a/docs/guides/glossary.md b/docs/guides/glossary.md
index 3d976340dce..abd1969f73f 100644
--- a/docs/guides/glossary.md
+++ b/docs/guides/glossary.md
@@ -9,11 +9,11 @@ This page contains a glossary of a few common terms used in DuckDB.
 
 ### In-Process Database Management System
 
-The DBMS runs in the client application's process instead of running as a separate process, which is common in the traditional client–server setup. An alterative term is **embeddable** database management system. In general, the term _"embedded database management system"_ should be avoided, as it can be confused with DBMSs targeting _embedded systems_ (which run on e.g. microcontrollers).
+The DBMS runs in the client application's process instead of running as a separate process, which is common in the traditional client–server setup. An alternative term is **embeddable** database management system. In general, the term _"embedded database management system"_ should be avoided, as it can be confused with DBMSs targeting _embedded systems_ (which run on e.g. microcontrollers).
 
 ### Replacement Scan
 
-In DuckDB, replacement scans are used when a table name used by a query does not exist in the catalog. These scans can substitute another data source intead of the table. Using replacement scans allows DuckDB to, e.g., seamlessly read [Pandas DataFrames]({% link docs/guides/python/sql_on_pandas.md %}) or read input data from remote sources without explicitly invoking the functions that perform this (e.g., [reading Parquet files from https]({% link docs/guides/network_cloud_storage/http_import.md %})). For details, see the [C API - Replacement Scans page]({% link docs/api/c/replacement_scans.md %}).
+In DuckDB, replacement scans are used when a table name used by a query does not exist in the catalog. These scans can substitute another data source instead of the table. Using replacement scans allows DuckDB to, e.g., seamlessly read [Pandas DataFrames]({% link docs/guides/python/sql_on_pandas.md %}) or read input data from remote sources without explicitly invoking the functions that perform this (e.g., [reading Parquet files from https]({% link docs/guides/network_cloud_storage/http_import.md %})). For details, see the [C API - Replacement Scans page]({% link docs/api/c/replacement_scans.md %}).
 
 ### Extension
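A sketch of the projection-pushdown behaviour described in the `read_file` hunk above, using the `read_text` table function (the glob path is a placeholder):

```sql
-- Only filename and size are projected, so the file contents are never read.
SELECT filename, size
FROM read_text('logs/*.txt');
```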
diff --git a/docs/guides/performance/my_workload_is_slow.md b/docs/guides/performance/my_workload_is_slow.md
index 7df557fc5f0..67570380b9a 100644
--- a/docs/guides/performance/my_workload_is_slow.md
+++ b/docs/guides/performance/my_workload_is_slow.md
@@ -13,7 +13,7 @@ If you find that your workload in DuckDB is slow, we recommend performing the fo
 1. Are you using the correct types? For example, [use `TIMESTAMP` to encode datetime values]({% link docs/guides/performance/schema.md %}#types).
 1. Are you reading from Parquet files? If so, do they have [row group sizes between 100k and 1M]({% link docs/guides/performance/file_formats.md %}#the-effect-of-row-group-sizes) and file sizes between 100MB to 10GB?
 1. Does the query plan look right? Study it with [`EXPLAIN`]({% link docs/guides/performance/how_to_tune_workloads.md %}#profiling).
-1. Is the workload running [in parallel]({% link docs/guides/performance/how_to_tune_workloads.md %}#paralellism)? Use `htop` or the operating system's task manager to observe this.
+1. Is the workload running [in parallel]({% link docs/guides/performance/how_to_tune_workloads.md %}#parallelism)? Use `htop` or the operating system's task manager to observe this.
 1. Is DuckDB using too many threads? Try [limiting the amount of threads]({% link docs/guides/performance/how_to_tune_workloads.md %}#parallelism-multi-core-processing).
 
 Are you aware of other common issues? If so, please click the _Report content issue_ link below and describe them along with their workarounds.
diff --git a/docs/operations_manual/footprint_of_duckdb/files_created_by_duckdb.md b/docs/operations_manual/footprint_of_duckdb/files_created_by_duckdb.md
index fb9f64f188d..67354408662 100644
--- a/docs/operations_manual/footprint_of_duckdb/files_created_by_duckdb.md
+++ b/docs/operations_manual/footprint_of_duckdb/files_created_by_duckdb.md
@@ -11,14 +11,14 @@ DuckDB creates the following global files and directories in the user's home dir
 
 | Location | Description | Shared between versions | Shared between clients |
 |-------|-------------------|--|--|
-| `~/.duckdbrc` | The content of this file is executed when starting the [DuckDB CLI client]({% link docs/api/cli/overview.md %}). The commands can be both [dot commmand]({% link docs/api/cli/dot_commands.md %}) and SQL statements. The naming of this file follows the `~/.bashrc` and `~/.zshrc` "run commands" files. | Yes | Only used by CLI |
+| `~/.duckdbrc` | The content of this file is executed when starting the [DuckDB CLI client]({% link docs/api/cli/overview.md %}). The commands can be both [dot commands]({% link docs/api/cli/dot_commands.md %}) and SQL statements. The naming of this file follows the `~/.bashrc` and `~/.zshrc` "run commands" files. | Yes | Only used by CLI |
 | `~/.duckdb_history` | History file, similar to `~/.bash_history` and `~/.zsh_history`. Used by the [DuckDB CLI client]({% link docs/api/cli/overview.md %}). | Yes | Only used by CLI |
 | `~/.duckdb/extensions` | Binaries of installed [extensions]({% link docs/extensions/overview.md %}). | No | Yes |
 | `~/.duckdb/stored_secrets` | [Persistent secrets]({% link docs/configuration/secrets_manager.md %}#persistent-secrets) created by the [Secrets manager]({% link docs/configuration/secrets_manager.md %}). | Yes | Yes |
 
 ## Local Files and Directories
 
-DuckDB creates the following files and directories in the working directory (for in-memory connections) or relative to the database file (for pesistent connections):
+DuckDB creates the following files and directories in the working directory (for in-memory connections) or relative to the database file (for persistent connections):
 
 | Name | Description | Example |
 |-------|-------------------|---|
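For the parallelism checks in the list above, one quick way to inspect and cap the thread count from SQL (the value 4 is an arbitrary example):

```sql
-- Check how many threads DuckDB is configured to use, then limit them.
SELECT current_setting('threads') AS configured_threads;
SET threads = 4;
```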
diff --git a/docs/sql/data_types/literal_types.md b/docs/sql/data_types/literal_types.md
index c8062bca2d4..214ec1dfbf0 100644
--- a/docs/sql/data_types/literal_types.md
+++ b/docs/sql/data_types/literal_types.md
@@ -59,7 +59,7 @@ Note that double quotes (`"`) cannot be used as string delimiter character: inst
 
 ### Implicit String Literal Concatenation
 
-Consecutive single-quoted string literals sepearated only by whitespace that contains at least one newline are implicitly concatenated:
+Consecutive single-quoted string literals separated only by whitespace that contains at least one newline are implicitly concatenated:
 
 ```sql
 SELECT 'Hello'
@@ -81,7 +81,7 @@ They both return the following result:
 |-------------|
 | Hello World |
 
-Note that implicit concatenation only works if there is at least one newline between the literals. Using adjacent string literals separated by whitspace without a newline results in a syntax error:
+Note that implicit concatenation only works if there is at least one newline between the literals. Using adjacent string literals separated by whitespace without a newline results in a syntax error:
 
 ```sql
 SELECT 'Hello' ' ' 'World' AS greeting;
diff --git a/docs/sql/dialect/postgresql_compatibility.md b/docs/sql/dialect/postgresql_compatibility.md
index 0d9cdee3387..d4937eba94f 100644
--- a/docs/sql/dialect/postgresql_compatibility.md
+++ b/docs/sql/dialect/postgresql_compatibility.md
@@ -38,7 +38,7 @@ SELECT 'Infinity'::FLOAT - 1.0 AS x;
 
 ## Division on Integers
 
-When computing division on integers, PostgreSQL performs integer divison, while DuckDB performs float division:
+When computing division on integers, PostgreSQL performs integer division, while DuckDB performs float division:
 
 ```sql
 SELECT 1 / 2 AS x;
diff --git a/docs/sql/functions/interval.md b/docs/sql/functions/interval.md
index 6dd040f436a..441887eb19a 100644
--- a/docs/sql/functions/interval.md
+++ b/docs/sql/functions/interval.md
@@ -35,7 +35,7 @@ The table below shows the available scalar functions for `INTERVAL` types.
 | [`to_decades(integer)`](#to_decadesinteger) | Construct a decade interval. |
 | [`to_hours(integer)`](#to_hoursinteger) | Construct a hour interval. |
 | [`to_microseconds(integer)`](#to_microsecondsinteger) | Construct a microsecond interval. |
-| [`to_millennia(integer)`](#to_millenniainteger) | Construct a millenium interval. |
+| [`to_millennia(integer)`](#to_millenniainteger) | Construct a millennium interval. |
 | [`to_milliseconds(integer)`](#to_millisecondsinteger) | Construct a millisecond interval. |
 | [`to_minutes(integer)`](#to_minutesinteger) | Construct a minute interval. |
 | [`to_months(integer)`](#to_monthsinteger) | Construct a month interval. |
@@ -121,7 +121,7 @@ The table below shows the available scalar functions for `INTERVAL` types.
-| **Description** | Construct a millenium interval. |
+| **Description** | Construct a millennium interval. |
 | **Example** | `to_millennia(5)` |
 | **Result** | `INTERVAL 5000 YEAR` |
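A runnable illustration of the interval constructors in the table above:

```sql
-- Two millennia and 36 hours, respectively, as INTERVAL values.
SELECT to_millennia(2) AS millennia, to_hours(36) AS hours;
```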
diff --git a/docs/sql/functions/list.md b/docs/sql/functions/list.md
index 3119fc71f38..446e32ec7de 100644
--- a/docs/sql/functions/list.md
+++ b/docs/sql/functions/list.md
@@ -420,7 +420,7 @@ FROM (VALUES (['Hello', '', 'World'])) t(strings);
 
 ## Range Functions
 
-DuckDB offers two range functions, [`range(start, stop, step)`](#range) and [`generate_series(start, stop, step)`](#generate_series), and their variants with default arguments for `stop` and `step`. The two functions' behavior differens regarding their `stop` argument. This is documented below.
+DuckDB offers two range functions, [`range(start, stop, step)`](#range) and [`generate_series(start, stop, step)`](#generate_series), and their variants with default arguments for `stop` and `step`. The two functions' behavior differs regarding their `stop` argument. This is documented below.
 
 #### `range`
diff --git a/docs/sql/functions/window_functions.md b/docs/sql/functions/window_functions.md
index f1c0e9d4a54..8dc15525159 100644
--- a/docs/sql/functions/window_functions.md
+++ b/docs/sql/functions/window_functions.md
@@ -180,7 +180,7 @@ The `first` and `last` aggregate functions are shadowed by the respective genera
 
 All [general-purpose window functions](#general-purpose-window-functions) that accept `IGNORE NULLS` respect nulls by default. This default behavior can optionally be made explicit via `RESPECT NULLS`.
 
-In contrast, all [aggregate window functions](#aggregate-window-functions) (except for `list` and its aliases, which can be made to ignore nulls via a `FILTER`) ignore nulls and do not accept `RESPECT NULLS`. For example, `sum(column) OVER (ORDER BY time) AS cumulativeColumn` computes a cumulative sum where rows with a `NULL` value of `column` have the same value of `cumulativeColumn` as the row that preceeds them.
+In contrast, all [aggregate window functions](#aggregate-window-functions) (except for `list` and its aliases, which can be made to ignore nulls via a `FILTER`) ignore nulls and do not accept `RESPECT NULLS`. For example, `sum(column) OVER (ORDER BY time) AS cumulativeColumn` computes a cumulative sum where rows with a `NULL` value of `column` have the same value of `cumulativeColumn` as the row that precedes them.
 
 ## Evaluation
diff --git a/docs/sql/statements/copy.md b/docs/sql/statements/copy.md
index dc7c66b9dca..b66d1cc060f 100644
--- a/docs/sql/statements/copy.md
+++ b/docs/sql/statements/copy.md
@@ -319,7 +319,7 @@ COPY
 (FIELD_IDS {my_list: {__duckdb_field_id: 42, element: 43}});
 ```
 
-Sets the `field_id` of colum `my_map` to 42, and columns `key` and `value` (default names of map children) to 43 and 44:
+Sets the `field_id` of column `my_map` to 42, and columns `key` and `value` (default names of map children) to 43 and 44:
 
 ```sql
 COPY
diff --git a/docs/sql/tutorial/index.html b/docs/sql/tutorial/index.html
index 103b18bacb2..cf25dc374ea 100644
--- a/docs/sql/tutorial/index.html
+++ b/docs/sql/tutorial/index.html
@@ -161,14 +161,14 @@

Introduction


-SQL is a declarative data manipulation languate. Simple SELECT queries follow the general form of
+SQL is a declarative data manipulation language. Simple SELECT queries follow the general form of
 SELECT [projection] FROM [tables] WHERE [selection].
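To ground the general form quoted in this last hunk, a minimal sketch (the table `cities` and its columns are hypothetical):

```sql
SELECT name, population      -- [projection]
FROM cities                  -- [tables]
WHERE population > 1000000;  -- [selection]
```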