diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md
index 01a392c12fe..46788758ee6 100644
--- a/website/docs/docs/build/incremental-models.md
+++ b/website/docs/docs/build/incremental-models.md
@@ -249,31 +249,29 @@ The `merge` strategy is available in dbt-postgres and dbt-redshift beginning in
-
-| data platform adapter | default strategy | additional supported strategies |
-| :-------------------| ---------------- | -------------------- |
-| [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | `append` | `delete+insert` |
-| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | `append` | `delete+insert` |
-| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | `merge` | `insert_overwrite` |
-| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | `append` | `merge`, `insert_overwrite` |
-| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | `merge` | `append`, `insert_overwrite` |
-| [dbt-snowflake](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models) | `merge` | `append`, `delete+insert` |
-| [dbt-trino](/reference/resource-configs/trino-configs#incremental) | `append` | `merge`, `delete+insert` |
+| data platform adapter | `append` | `merge` | `delete+insert` | `insert_overwrite` |
+|-----------------------------------------------------------------------------------------------------|:--------:|:-------:|:---------------:|:------------------:|
+| [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | ✅ | | ✅ | |
+| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | ✅ | | ✅ | |
+| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | | ✅ | | ✅ |
+| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | ✅ | ✅ | | ✅ |
+| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | ✅ | ✅ | | ✅ |
+| [dbt-snowflake](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models) | ✅ | ✅ | ✅ | |
+| [dbt-trino](/reference/resource-configs/trino-configs#incremental) | ✅ | ✅ | ✅ | |
-
-| data platform adapter | default strategy | additional supported strategies |
-| :----------------- | :----------------| : ---------------------------------- |
-| [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | `append` | `merge` , `delete+insert` |
-| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | `append` | `merge`, `delete+insert` |
-| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | `merge` | `insert_overwrite` |
-| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | `append` | `merge`, `insert_overwrite` |
-| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | `merge` | `append`, `insert_overwrite` |
-| [dbt-snowflake](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models) | `merge` | `append`, `delete+insert` |
-| [dbt-trino](/reference/resource-configs/trino-configs#incremental) | `append` | `merge`, `delete+insert` |
+| data platform adapter | `append` | `merge` | `delete+insert` | `insert_overwrite` |
+|-----------------------------------------------------------------------------------------------------|:--------:|:-------:|:---------------:|:------------------:|
+| [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | ✅ | ✅ | ✅ | |
+| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | ✅ | ✅ | ✅ | |
+| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | | ✅ | | ✅ |
+| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | ✅ | ✅ | | ✅ |
+| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | ✅ | ✅ | | ✅ |
+| [dbt-snowflake](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models) | ✅ | ✅ | ✅ | |
+| [dbt-trino](/reference/resource-configs/trino-configs#incremental) | ✅ | ✅ | ✅ | |
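The strategy itself is chosen per model (or project-wide). As a minimal sketch, assuming a hypothetical `stg_events` model and `raw.events` source (names are illustrative, not taken from the tables above):

```sql
-- models/stg_events.sql (hypothetical model; names are illustrative)
{{
    config(
        materialized='incremental',
        incremental_strategy='delete+insert',
        unique_key='event_id'
    )
}}

select
    event_id,
    event_time,
    payload
from {{ source('raw', 'events') }}

{% if is_incremental() %}
  -- on incremental runs, only scan rows newer than what is already loaded
  where event_time > (select max(event_time) from {{ this }})
{% endif %}
```

On the first run (or with `--full-refresh`) dbt builds the full table; on subsequent runs, only the filtered rows are applied using the configured strategy.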
diff --git a/website/docs/docs/collaborate/govern/model-contracts.md b/website/docs/docs/collaborate/govern/model-contracts.md
index 8e7598f8e3b..e3ea1e8c70c 100644
--- a/website/docs/docs/collaborate/govern/model-contracts.md
+++ b/website/docs/docs/collaborate/govern/model-contracts.md
@@ -91,7 +91,7 @@ When building a model with a defined contract, dbt will do two things differentl
Select the adapter-specific tab for more information on [constraint](/reference/resource-properties/constraints) support across platforms. Constraints fall into three categories based on support and platform enforcement:
- **Supported and enforced** — The model won't build if it violates the constraint.
-- **Supported and not enforced** — The platform supports specifying the type of constraint, but a model can still build even if building the model violates the constraint. This constraint exists for metadata purposes only. This is common for modern cloud data warehouses and less common for legacy databases.
+- **Supported and not enforced** — The platform supports specifying the type of constraint, but a model can still build even if building the model violates the constraint. This constraint exists for metadata purposes only. This approach is more typical in cloud data warehouses than in transactional databases, where strict rule enforcement is more common.
- **Not supported and not enforced** — You can't specify the type of constraint for the platform.
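To make the three categories concrete, here is a sketch of a contracted model with column-level constraints (model and column names are hypothetical; which category each constraint falls into depends on the platform tab you select):

```yaml
models:
  - name: dim_customers          # hypothetical model
    config:
      contract:
        enforced: true
    columns:
      - name: customer_id
        data_type: int
        constraints:
          - type: not_null       # enforced on most platforms: the build fails on violation
          - type: primary_key    # commonly supported but not enforced on cloud warehouses
      - name: customer_name
        data_type: string
```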
diff --git a/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md b/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md
index 30221332355..cb3a6804247 100644
--- a/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md
+++ b/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md
@@ -29,14 +29,6 @@ Using Databricks workflows to call the dbt Cloud job API can be useful for sever
- [Databricks CLI](https://docs.databricks.com/dev-tools/cli/index.html)
- **Note**: You only need to set up your authentication. Once you have set up your Host and Token and are able to run `databricks workspace ls /Users/`, you can proceed with the rest of this guide.
-## Configure Databricks workflows for dbt Cloud jobs
-
-To use Databricks workflows for running dbt Cloud jobs, you need to perform the following steps:
-
-- [Set up a Databricks secret scope](#set-up-a-databricks-secret-scope)
-- [Create a Databricks Python notebook](#create-a-databricks-python-notebook)
-- [Configure the workflows to run the dbt Cloud jobs](#configure-the-workflows-to-run-the-dbt-cloud-jobs)
-
## Set up a Databricks secret scope
1. Retrieve a **[User API Token](https://docs.getdbt.com/docs/dbt-cloud-apis/user-tokens#user-api-tokens)** or **[Service Account Token](https://docs.getdbt.com/docs/dbt-cloud-apis/service-tokens#generating-service-account-tokens)** from dbt Cloud.
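For reference, the scope and secret created in this section can also be set up from the legacy Databricks CLI mentioned in the prerequisites — a sketch, with placeholder scope and key names:

```shell
# Create a scope to hold the dbt Cloud credentials (scope name is an assumption)
databricks secrets create-scope --scope dbt-cloud
# Store the token; without --string-value the CLI opens an editor for the secret
databricks secrets put --scope dbt-cloud --key dbt-cloud-api-token
```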
diff --git a/website/docs/guides/snowflake-qs.md b/website/docs/guides/snowflake-qs.md
index abb18276b97..5b4f9e3e2be 100644
--- a/website/docs/guides/snowflake-qs.md
+++ b/website/docs/guides/snowflake-qs.md
@@ -462,7 +462,7 @@ Sources make it possible to name and describe the data loaded into your warehous
5. Execute `dbt run`.
- The results of your `dbt run` will be exactly the same as the previous step. Your `stg_cusutomers` and `stg_orders`
+ The results of your `dbt run` will be exactly the same as the previous step. Your `stg_customers` and `stg_orders`
models will still query from the same raw data source in Snowflake. By using `source`, you can
test and document your raw data and also understand the lineage of your sources.
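For orientation, the `source` definition behind this step looks roughly like the following, reusing the quickstart's jaffle_shop naming (your database and schema values depend on the earlier loading steps):

```yaml
# models/sources.yml
version: 2

sources:
  - name: jaffle_shop
    description: Raw jaffle_shop data loaded into Snowflake
    database: raw
    schema: jaffle_shop
    tables:
      - name: customers
      - name: orders
```

With this in place, `stg_customers` selects from `{{ source('jaffle_shop', 'customers') }}` rather than hard-coding `raw.jaffle_shop.customers`, which is what lets dbt document, test, and trace the lineage of the raw tables.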
diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index fb3c9e7f5c3..677cad57ce6 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -38,9 +38,9 @@ When materializing a model as `table`, you may include several optional configs
## Incremental models
The dbt-databricks plugin leans heavily on the [`incremental_strategy` config](/docs/build/incremental-models#about-incremental_strategy). This config tells the incremental materialization how to build models in runs beyond their first. It can be set to one of four values:
- - **`append`** (default): Insert new records without updating or overwriting any existing data.
+ - **`append`**: Insert new records without updating or overwriting any existing data.
- **`insert_overwrite`**: If `partition_by` is specified, overwrite partitions in the table with new data. If no `partition_by` is specified, overwrite the entire table with new data.
- - **`merge`** (Delta and Hudi file format only): Match records based on a `unique_key`, updating old records, and inserting new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.)
+ - **`merge`** (default; Delta and Hudi file format only): Match records based on a `unique_key`, updating old records, and inserting new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.)
- **`replace_where`** (Delta file format only): Match records based on `incremental_predicates`, replacing all records that match the predicates from the existing table with records matching the predicates from the new data. (If no `incremental_predicates` are specified, all new data is inserted, similar to `append`.)
Each of these strategies has its pros and cons, which we'll discuss below. As with any model config, `incremental_strategy` may be specified in `dbt_project.yml` or within a model file's `config()` block.
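As a sketch of the in-model option, a Databricks model opting into `merge` explicitly might look like this (table, source, and column names are assumptions):

```sql
{{
    config(
        materialized='incremental',
        incremental_strategy='merge',
        file_format='delta',
        unique_key='user_id'
    )
}}

select user_id, email, updated_at
from {{ source('raw', 'users') }}
```

With `unique_key` set, incoming rows whose `user_id` already exists update the old records in place; unmatched rows are inserted.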
@@ -49,8 +49,6 @@ Each of these strategies has its pros and cons, which we'll discuss below. As wi
Following the `append` strategy, dbt will perform an `insert into` statement with all new data. The appeal of this strategy is that it is straightforward and functional across all platforms, file types, connection methods, and Apache Spark versions. However, this strategy _cannot_ update, overwrite, or delete existing data, so it is likely to insert duplicate records for many data sources.
-Specifying `append` as the incremental strategy is optional, since it's the default strategy used when none is specified.
-
diff --git a/website/docs/reference/resource-configs/postgres-configs.md b/website/docs/reference/resource-configs/postgres-configs.md
--- a/website/docs/reference/resource-configs/postgres-configs.md
+++ b/website/docs/reference/resource-configs/postgres-configs.md
-- `append` (default)
-- `delete+insert`
+- `append` (default when `unique_key` is not defined)
+- `delete+insert` (default when `unique_key` is defined)
-- `append` (default)
+- `append` (default when `unique_key` is not defined)
- `merge`
-- `delete+insert`
+- `delete+insert` (default when `unique_key` is defined)
diff --git a/website/docs/reference/resource-configs/redshift-configs.md b/website/docs/reference/resource-configs/redshift-configs.md
index b559c0451b0..85b2af0c552 100644
--- a/website/docs/reference/resource-configs/redshift-configs.md
+++ b/website/docs/reference/resource-configs/redshift-configs.md
@@ -16,16 +16,16 @@ In dbt-redshift, the following incremental materialization strategies are suppor
-- `append` (default)
-- `delete+insert`
-
+- `append` (default when `unique_key` is not defined)
+- `delete+insert` (default when `unique_key` is defined)
+
-- `append` (default)
+- `append` (default when `unique_key` is not defined)
- `merge`
-- `delete+insert`
+- `delete+insert` (default when `unique_key` is defined)
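Because the default now depends on whether `unique_key` is set, a model that defines a `unique_key` but never specified a strategy switches from `append` to `delete+insert` behavior. Projects that want the old behavior can pin the strategy explicitly, for example in `dbt_project.yml` (project and folder names are assumptions):

```yaml
models:
  my_project:
    staging:
      +materialized: incremental
      +incremental_strategy: append
```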
diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings.jpg b/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings.jpg
index 1239ea784de..04ec9280f14 100644
Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings.jpg and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings.jpg differ