diff --git a/website/docs/best-practices/clone-incremental-models.md b/website/docs/best-practices/clone-incremental-models.md
new file mode 100644
index 00000000000..4096af489ab
--- /dev/null
+++ b/website/docs/best-practices/clone-incremental-models.md
@@ -0,0 +1,79 @@
+---
+title: "Clone incremental models as the first step of your CI job"
+id: "clone-incremental-models"
+description: Learn how to clone incremental models as the first step of your CI job.
+displayText: Clone incremental models as the first step of your CI job
+hoverSnippet: Learn how to clone incremental models for CI jobs.
+---
+
+Before you begin, you must be aware of a few conditions:
+- `dbt clone` is only available in dbt version 1.6 and newer. Refer to our [upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) for help enabling newer versions in dbt Cloud.
+- This strategy only works for warehouses that support zero-copy cloning (otherwise `dbt clone` will just create pointer views).
+- Some teams may want to test that their incremental models run in both incremental mode and full-refresh mode.
+
+Imagine you've created a [Slim CI job](/docs/deploy/continuous-integration) in dbt Cloud and it is configured to:
+
+- Defer to your production environment.
+- Run the command `dbt build --select state:modified+` to run and test all of the models you've modified and their downstream dependencies.
+- Trigger whenever a developer on your team opens a PR against the main branch.
+
+<Lightbox src="/img/best-practices/slim-ci-job.png" />
+
+Now imagine your dbt project looks something like this in the DAG:
+
+<Lightbox src="/img/best-practices/dag-example.png" />
+
+When you open a pull request (PR) that modifies `dim_wizards`, your CI job will kick off and build _only the modified models and their downstream dependencies_ (in this case, `dim_wizards` and `fct_orders`) into a temporary schema that's unique to your PR.
+
+This build mimics the behavior of what will happen once the PR is merged into the main branch. It ensures you're not introducing breaking changes, without needing to build your entire dbt project.
+
+## What happens when one of the modified models (or one of their downstream dependencies) is an incremental model?
+
+Because your CI job is building modified models into a PR-specific schema, on the first execution of `dbt build --select state:modified+`, the modified incremental model will be built in its entirety _because it does not yet exist in the PR-specific schema_ and [is_incremental will be false](/docs/build/incremental-models#understanding-the-is_incremental-macro). You're running in `full-refresh` mode.
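+
+As a refresher, a typical incremental model wraps its filter in `is_incremental()`, so that first CI build scans the entire source (a minimal sketch with a hypothetical `stg_events` staging model):
+
+```sql
+{{ config(materialized='incremental', unique_key='event_id') }}
+
+select * from {{ ref('stg_events') }}
+
+{% if is_incremental() %}
+-- skipped on the first build, when the model doesn't yet exist in the target schema
+where event_time > (select max(event_time) from {{ this }})
+{% endif %}
+```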
+
+This can be suboptimal because:
+- Incremental models are typically your largest datasets, so building them in their entirety takes a long time, which slows down development and incurs high warehouse costs.
+- There are situations where a `full-refresh` of the incremental model passes successfully in your CI job, but an _incremental_ build of that same table in prod would fail when the PR is merged into main (think schema drift, where the [on_schema_change](/docs/build/incremental-models#what-if-the-columns-of-my-incremental-model-change) config is set to `fail`).
+
+You can alleviate these problems by zero-copy cloning the relevant, pre-existing incremental models into your PR-specific schema as the first step of the CI job using the `dbt clone` command. This way, the incremental models already exist in the PR-specific schema when you first execute `dbt build --select state:modified+`, so the `is_incremental` flag will be `true`.
+
+You'll have two commands for your dbt Cloud CI check to execute:
+1. Clone all of the pre-existing incremental models that have been modified or are downstream of another model that has been modified: `dbt clone --select state:modified+,config.materialized:incremental,state:old`
+2. Build all of the models that have been modified and their downstream dependencies: `dbt build --select state:modified+`
+
+Because of your first clone step, the incremental models selected in your `dbt build` on the second step will run in incremental mode.
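+
+Laid out as the two job steps (using the commands from the list above):
+
+```shell
+# Step 1: clone pre-existing incremental models touched by this PR into the PR schema
+dbt clone --select state:modified+,config.materialized:incremental,state:old
+
+# Step 2: build and test modified models and their downstream dependencies
+dbt build --select state:modified+
+```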
+
+<Lightbox src="/img/best-practices/clone-command.png" />
+
+Your CI jobs will run faster, and you're more accurately mimicking the behavior of what will happen once the PR has been merged into main.
+
+### Expansion on "think schema drift where the [on_schema_change](/docs/build/incremental-models#what-if-the-columns-of-my-incremental-model-change) config is set to `fail`" from above
+
+Imagine you have an incremental model `my_incremental_model` with the following config:
+
+```sql
+{{
+    config(
+        materialized='incremental',
+        unique_key='unique_id',
+        on_schema_change='fail'
+    )
+}}
+```
+
+Now, let’s say you open up a PR that adds a new column to `my_incremental_model`. In this case:
+- An incremental build will fail.
+- A `full-refresh` will succeed.
+
+If you have a daily production job that just executes `dbt build` without a `--full-refresh` flag, you will get a failure once the PR is merged into main and the job kicks off. So the question is: what do you want to happen in CI?
+- Do you want to also get a failure in CI, so that you know that once this PR is merged into main you need to immediately execute a `dbt build --full-refresh --select my_incremental_model` in production in order to avoid a failure in prod? This will block your CI check from passing.
+- Do you want your CI check to succeed, because once you run a `full-refresh` for this model in prod you will be in a successful state? This may lead to unpleasant surprises if you forget that you need to execute a `dbt build --full-refresh --select my_incremental_model` in production and your job suddenly fails after the PR is merged into main.
+
+There’s probably no perfect solution here; it’s all just tradeoffs! Our preference would be to have the failing CI job and have to manually override the blocking branch protection rule so that there are no surprises and we can proactively run the appropriate command in production once the PR is merged.
+
+### Expansion on "why `state:old`"
+
+For brand new incremental models, you want them to run in `full-refresh` mode in CI, because they will run in `full-refresh` mode in production when the PR is merged into `main`. They also don't exist yet in the production environment... they're brand new!
+If you don't include `state:old`, you won't get an error, just a "No relation found in state manifest for…" warning. So it technically works without `state:old`, but adding it is more explicit and means dbt won't even try to clone the brand-new incremental models.
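+
+Broken down, the comma in the `--select` argument intersects three criteria:
+
+```shell
+# state:modified+                  -> modified models and their downstream dependencies
+# config.materialized:incremental  -> only models materialized as incremental
+# state:old                        -> only models that already exist in the deferred (production) manifest
+dbt clone --select state:modified+,config.materialized:incremental,state:old
+```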
diff --git a/website/docs/docs/build/cumulative-metrics.md b/website/docs/docs/build/cumulative-metrics.md
index 708045c1f3e..45a136df751 100644
--- a/website/docs/docs/build/cumulative-metrics.md
+++ b/website/docs/docs/build/cumulative-metrics.md
@@ -38,10 +38,7 @@ metrics:
## Limitations
Cumulative metrics are currently under active development and have the following limitations:
-
-1. You can only use the [`metric_time` dimension](/docs/build/dimensions#time) to check cumulative metrics. If you don't use `metric_time` in the query, the cumulative metric will return incorrect results because it won't perform the time spine join. This means you cannot reference time dimensions other than the `metric_time` in the query.
-2. If you use `metric_time` in your query filter but don't include "start_time" and "end_time," cumulative metrics will left-censor the input data. For example, if you query a cumulative metric with a 7-day window with the filter `{{ TimeDimension('metric_time') }} BETWEEN '2023-08-15' AND '2023-08-30' `, the values for `2023-08-15` to `2023-08-20` return missing or incomplete data. This is because we apply the `metric_time` filter to the aggregation input. To avoid this, you must use `start_time` and `end_time` in the query filter.
-
+- You are required to use the [`metric_time` dimension](/docs/build/dimensions#time) when querying cumulative metrics. If you don't use `metric_time` in the query, the cumulative metric will return incorrect results because it won't perform the time spine join. This means you cannot reference time dimensions other than `metric_time` in the query.
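+
+For example, a query for a hypothetical `cumulative_revenue` metric must group by `metric_time` (a sketch using the MetricFlow CLI):
+
+```shell
+# metric_time triggers the time spine join that cumulative metrics rely on
+mf query --metrics cumulative_revenue --group-by metric_time
+```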
## Cumulative metrics example
diff --git a/website/docs/docs/build/dimensions.md b/website/docs/docs/build/dimensions.md
index b8679fe11b0..683ff730d3c 100644
--- a/website/docs/docs/build/dimensions.md
+++ b/website/docs/docs/build/dimensions.md
@@ -15,7 +15,8 @@ In a data platform, dimensions is part of a larger structure called a semantic m
Groups are defined within semantic models, alongside entities and measures, and correspond to non-aggregatable columns in your dbt model that provides categorical or time-based context. In SQL, dimensions is typically included in the GROUP BY clause.-->
-All dimensions require a `name`, `type` and in some cases, an `expr` parameter.
+All dimensions require a `name`, a `type`, and in some cases, an `expr` parameter. The `name` for your dimension must be unique within the semantic model and cannot be the same as an existing `entity` or `measure` in that same model.
+
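+For example, a time dimension defined like the following must not reuse a name already taken by an entity or measure in the same semantic model (a sketch with hypothetical names):
+
+```yaml
+dimensions:
+  - name: ordered_at          # must not collide with any entity or measure in this model
+    type: time
+    expr: ordered_at_utc      # optional alias for the underlying column
+    type_params:
+      time_granularity: day
+```
+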
| Parameter | Description | Type |
| --------- | ----------- | ---- |
diff --git a/website/docs/docs/build/entities.md b/website/docs/docs/build/entities.md
index 464fa2c3b8c..e44f9e79af6 100644
--- a/website/docs/docs/build/entities.md
+++ b/website/docs/docs/build/entities.md
@@ -8,7 +8,7 @@ tags: [Metrics, Semantic Layer]
Entities are real-world concepts in a business such as customers, transactions, and ad campaigns. We often focus our analyses around specific entities, such as customer churn or annual recurring revenue modeling. We represent entities in our semantic models using id columns that serve as join keys to other semantic models in your semantic graph.
-Within a semantic graph, the required parameters for an entity are `name` and `type`. The `name` refers to either the key column name from the underlying data table, or it may serve as an alias with the column name referenced in the `expr` parameter.
+Within a semantic graph, the required parameters for an entity are `name` and `type`. The `name` refers to either the key column name from the underlying data table, or it may serve as an alias with the column name referenced in the `expr` parameter. The `name` for your entity must be unique within the semantic model and cannot be the same as an existing `measure` or `dimension` in that same model.
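+
+For example, an entity whose `name` aliases a differently named key column uses `expr` (a sketch with hypothetical names):
+
+```yaml
+entities:
+  - name: customer            # unique within this semantic model
+    type: primary
+    expr: customer_id         # key column in the underlying dbt model
+```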
Entities can be specified with a single column or multiple columns. Entities (join keys) in a semantic model are identified by their name. Each entity name must be unique within a semantic model, but it doesn't have to be unique across different semantic models.
diff --git a/website/docs/docs/build/measures.md b/website/docs/docs/build/measures.md
index 74d37b70e94..feea2b30ca4 100644
--- a/website/docs/docs/build/measures.md
+++ b/website/docs/docs/build/measures.md
@@ -34,7 +34,8 @@ measures:
When you create a measure, you can either give it a custom name or use the `name` of the data platform column directly. If the `name` of the measure is different from the column name, you need to add an `expr` to specify the column name. The `name` of the measure is used when creating a metric.
-Measure names must be **unique** across all semantic models in a project.
+Measure names must be unique across all semantic models in a project and cannot be the same as an existing `entity` or `dimension` within the same model.
+
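+For example, a measure whose `name` differs from the column it aggregates needs an `expr` (a sketch with hypothetical names):
+
+```yaml
+measures:
+  - name: order_total         # unique across all semantic models in the project
+    description: Sum of all order amounts.
+    agg: sum
+    expr: amount              # column in the underlying dbt model
+```
+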
### Description
diff --git a/website/docs/docs/cloud/connect-data-platform/about-connections.md b/website/docs/docs/cloud/connect-data-platform/about-connections.md
index 1329d179900..93bbf83584f 100644
--- a/website/docs/docs/cloud/connect-data-platform/about-connections.md
+++ b/website/docs/docs/cloud/connect-data-platform/about-connections.md
@@ -3,7 +3,7 @@ title: "About data platform connections"
id: about-connections
description: "Information about data platform connections"
sidebar_label: "About data platform connections"
-pagination_next: "docs/cloud/connect-data-platform/connect-starburst-trino"
+pagination_next: "docs/cloud/connect-data-platform/connect-microsoft-fabric"
pagination_prev: null
---
dbt Cloud can connect with a variety of data platform providers including:
@@ -11,6 +11,7 @@ dbt Cloud can connect with a variety of data platform providers including:
- [Apache Spark](/docs/cloud/connect-data-platform/connect-apache-spark)
- [Databricks](/docs/cloud/connect-data-platform/connect-databricks)
- [Google BigQuery](/docs/cloud/connect-data-platform/connect-bigquery)
+- [Microsoft Fabric](/docs/cloud/connect-data-platform/connect-microsoft-fabric)
- [PostgreSQL](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb)
- [Snowflake](/docs/cloud/connect-data-platform/connect-snowflake)
- [Starburst or Trino](/docs/cloud/connect-data-platform/connect-starburst-trino)
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-microsoft-fabric.md b/website/docs/docs/cloud/connect-data-platform/connect-microsoft-fabric.md
new file mode 100644
index 00000000000..e9d67524e89
--- /dev/null
+++ b/website/docs/docs/cloud/connect-data-platform/connect-microsoft-fabric.md
@@ -0,0 +1,43 @@
+---
+title: "Connect Microsoft Fabric"
+description: "Configure Microsoft Fabric connection."
+sidebar_label: "Connect Microsoft Fabric"
+---
+
+## Supported authentication methods
+The supported authentication methods are:
+- Azure Active Directory (Azure AD) service principal
+- Azure AD password
+
+SQL password (LDAP) is not supported in Microsoft Fabric Synapse Data Warehouse, so you must use Azure AD. This means that to use [Microsoft Fabric](https://www.microsoft.com/en-us/microsoft-fabric) in dbt Cloud, you will need at least one Azure AD service principal to connect dbt Cloud to Fabric, ideally one service principal for each user.
+
+### Active Directory service principal
+The following are the required fields for setting up a connection with Microsoft Fabric using Azure AD service principal authentication.
+
+| Field | Description |
+| --- | --- |
+| **Server** | The service principal's **host** value for the Fabric test endpoint. |
+| **Port** | The port to connect to Microsoft Fabric. You can use `1433` (the default), which is the standard SQL server port number. |
+| **Database** | The service principal's **database** value for the Fabric test endpoint. |
+| **Authentication** | Choose **Service Principal** from the dropdown. |
+| **Tenant ID** | The service principal's **Directory (tenant) ID**. |
+| **Client ID** | The service principal's **Application (client) ID**. |
+| **Client secret** | The service principal's **client secret** (not the **client secret id**). |
+
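+For reference, the equivalent service principal connection in a dbt Core `profiles.yml` looks roughly like the following. This is a sketch based on the `dbt-fabric` adapter's documented fields; the placeholder values are hypothetical:
+
+```yaml
+my_project:
+  target: dev
+  outputs:
+    dev:
+      type: fabric
+      driver: "ODBC Driver 18 for SQL Server"  # assumes the ODBC driver is installed
+      server: example.datawarehouse.fabric.microsoft.com
+      port: 1433
+      database: example_db
+      schema: dbo
+      authentication: ServicePrincipal
+      tenant_id: "<directory (tenant) id>"
+      client_id: "<application (client) id>"
+      client_secret: "<client secret value>"
+```
+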
+
+### Active Directory password
+
+The following are the required fields for setting up a connection with Microsoft Fabric using Azure AD password authentication.
+
+| Field | Description |
+| --- | --- |
+| **Server** | The server hostname to connect to Microsoft Fabric. |
+| **Port** | The server port. You can use `1433` (the default), which is the standard SQL server port number. |
+| **Database** | The database name. |
+| **Authentication** | Choose **Active Directory Password** from the dropdown. |
+| **User** | The AD username. |
+| **Password** | The AD username's password. |
+
+## Configuration
+
+To learn how to optimize performance with data platform-specific configurations in dbt Cloud, refer to [Microsoft Fabric DWH configurations](/reference/resource-configs/fabric-configs).
diff --git a/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-ide.md b/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-ide.md
deleted file mode 100644
index 3c41432bc62..00000000000
--- a/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-ide.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: "dbt Cloud IDE"
-description: "Learn how to configure Git in dbt Cloud"
-pagination_next: "docs/cloud/dbt-cloud-ide/develop-in-the-cloud"
-pagination_prev: null
----
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/website/docs/docs/core/about-core-setup.md b/website/docs/docs/core/about-core-setup.md
index 2f6c077ba7d..8b170ba70d4 100644
--- a/website/docs/docs/core/about-core-setup.md
+++ b/website/docs/docs/core/about-core-setup.md
@@ -14,7 +14,7 @@ dbt Core is an [open-source](https://github.com/dbt-labs/dbt-core) tool that ena
- [Connecting to a data platform](/docs/core/connect-data-platform/profiles.yml)
- [How to run your dbt projects](/docs/running-a-dbt-project/run-your-dbt-projects)
-To learn about developing dbt projects in dbt Cloud or dbt Core, refer to [Develop dbt](/docs/cloud/about-develop-dbt).
-- **Note** — dbt Cloud provides a command line interface with the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation). Both the open-sourced dbt Core and the dbt Cloud CLI are command line tools that let you run dbt commands. The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its [features](/docs/cloud/about-cloud/dbt-cloud-features).
+To learn about developing dbt projects in dbt Cloud, refer to [Develop with dbt Cloud](/docs/cloud/about-develop-dbt).
+ - dbt Cloud provides a command line interface with the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation). Both dbt Core and the dbt Cloud CLI are command line tools that let you run dbt commands. The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its [features](/docs/cloud/about-cloud/dbt-cloud-features).
If you need a more detailed first-time setup guide for specific data platforms, read our [quickstart guides](https://docs.getdbt.com/guides).
diff --git a/website/docs/docs/core/connect-data-platform/about-core-connections.md b/website/docs/docs/core/connect-data-platform/about-core-connections.md
index 492e5ae878a..61a7805d232 100644
--- a/website/docs/docs/core/connect-data-platform/about-core-connections.md
+++ b/website/docs/docs/core/connect-data-platform/about-core-connections.md
@@ -14,6 +14,7 @@ dbt Core can connect with a variety of data platform providers including:
- [Apache Spark](/docs/core/connect-data-platform/spark-setup)
- [Databricks](/docs/core/connect-data-platform/databricks-setup)
- [Google BigQuery](/docs/core/connect-data-platform/bigquery-setup)
+- [Microsoft Fabric](/docs/core/connect-data-platform/fabric-setup)
- [PostgreSQL](/docs/core/connect-data-platform/postgres-setup)
- [Snowflake](/docs/core/connect-data-platform/snowflake-setup)
- [Starburst or Trino](/docs/core/connect-data-platform/trino-setup)
diff --git a/website/docs/docs/core/connect-data-platform/fabric-setup.md b/website/docs/docs/core/connect-data-platform/fabric-setup.md
index 11a8cf6f98b..deef1e04b22 100644
--- a/website/docs/docs/core/connect-data-platform/fabric-setup.md
+++ b/website/docs/docs/core/connect-data-platform/fabric-setup.md
@@ -8,7 +8,7 @@ meta:
github_repo: 'Microsoft/dbt-fabric'
pypi_package: 'dbt-fabric'
min_core_version: '1.4.0'
- cloud_support: Not Supported
+ cloud_support: Supported
platform_name: 'Microsoft Fabric'
config_page: '/reference/resource-configs/fabric-configs'
---
diff --git a/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/microsoft-fabric-support-rn.md b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/microsoft-fabric-support-rn.md
index 9c4b81e9b94..b416817f3a0 100644
--- a/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/microsoft-fabric-support-rn.md
+++ b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/microsoft-fabric-support-rn.md
@@ -9,7 +9,9 @@ date: 2023-11-28
Public Preview is now available in dbt Cloud for Microsoft Fabric!
-To learn more, check out the [Quickstart for dbt Cloud and Microsoft Fabric](/guides/microsoft-fabric?step=1). The guide walks you through:
+To learn more, refer to [Connect Microsoft Fabric](/docs/cloud/connect-data-platform/connect-microsoft-fabric) and [Microsoft Fabric DWH configurations](/reference/resource-configs/fabric-configs).
+
+Also, check out the [Quickstart for dbt Cloud and Microsoft Fabric](/guides/microsoft-fabric?step=1). The guide walks you through:
- Loading the Jaffle Shop sample data (provided by dbt Labs) into your Microsoft Fabric warehouse.
- Connecting dbt Cloud to Microsoft Fabric.
diff --git a/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/repo-caching.md b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/repo-caching.md
new file mode 100644
index 00000000000..7c35991e961
--- /dev/null
+++ b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/repo-caching.md
@@ -0,0 +1,14 @@
+---
+title: "New: Support for Git repository caching"
+description: "November 2023: dbt Cloud can cache your project's code (as well as other dbt packages) to ensure runs can begin despite an upstream Git provider's outage."
+sidebar_label: "New: Support for Git repository caching"
+sidebar_position: 07
+tags: [Nov-2023]
+date: 2023-11-29
+---
+
+Now available for dbt Cloud Enterprise plans is a new option to enable Git repository caching for your job runs. When enabled, dbt Cloud caches your dbt project's Git repository and uses the cached copy instead if there's an outage with the Git provider. This feature improves the reliability and stability of your job runs.
+
+To learn more, refer to [Repo caching](/docs/deploy/deploy-environments#git-repository-caching).
+
+
\ No newline at end of file
diff --git a/website/docs/guides/microsoft-fabric-qs.md b/website/docs/guides/microsoft-fabric-qs.md
index c7c53a2aac7..1d1e016a6f1 100644
--- a/website/docs/guides/microsoft-fabric-qs.md
+++ b/website/docs/guides/microsoft-fabric-qs.md
@@ -9,7 +9,7 @@ recently_updated: true
---
## Introduction
-In this quickstart guide, you'll learn how to use dbt Cloud with Microsoft Fabric. It will show you how to:
+In this quickstart guide, you'll learn how to use dbt Cloud with [Microsoft Fabric](https://www.microsoft.com/en-us/microsoft-fabric). It will show you how to:
- Load the Jaffle Shop sample data (provided by dbt Labs) into your Microsoft Fabric warehouse.
- Connect dbt Cloud to Microsoft Fabric.
@@ -27,7 +27,7 @@ A public preview of Microsoft Fabric in dbt Cloud is now available!
### Prerequisites
- You have a [dbt Cloud](https://www.getdbt.com/signup/) account.
- You have started the Microsoft Fabric (Preview) trial. For details, refer to [Microsoft Fabric (Preview) trial](https://learn.microsoft.com/en-us/fabric/get-started/fabric-trial) in the Microsoft docs.
-- As a Microsoft admin, you’ve enabled service principal authentication. For details, refer to [Enable service principal authentication](https://learn.microsoft.com/en-us/fabric/admin/metadata-scanning-enable-read-only-apis) in the Microsoft docs. dbt Cloud needs these authentication credentials to connect to Microsoft Fabric.
+- As a Microsoft admin, you’ve enabled service principal authentication. You must add the service principal to the Microsoft Fabric workspace with either a Member (recommended) or Admin permission set. For details, refer to [Enable service principal authentication](https://learn.microsoft.com/en-us/fabric/admin/metadata-scanning-enable-read-only-apis) in the Microsoft docs. dbt Cloud needs these authentication credentials to connect to Microsoft Fabric.
### Related content
- [dbt Courses](https://courses.getdbt.com/collections)
@@ -54,8 +54,8 @@ A public preview of Microsoft Fabric in dbt Cloud is now available!
CREATE TABLE dbo.customers
(
[ID] [int],
- [FIRST_NAME] [varchar] (8000),
- [LAST_NAME] [varchar] (8000)
+ \[FIRST_NAME] [varchar](8000),
+ \[LAST_NAME] [varchar](8000)
);
COPY INTO [dbo].[customers]
@@ -72,7 +72,7 @@ A public preview of Microsoft Fabric in dbt Cloud is now available!
[USER_ID] [int],
-- [ORDER_DATE] [int],
[ORDER_DATE] [date],
- [STATUS] [varchar] (8000)
+ \[STATUS] [varchar](8000)
);
COPY INTO [dbo].[orders]
@@ -87,8 +87,8 @@ A public preview of Microsoft Fabric in dbt Cloud is now available!
(
[ID] [int],
[ORDERID] [int],
- [PAYMENTMETHOD] [varchar] (8000),
- [STATUS] [varchar] (8000),
+ \[PAYMENTMETHOD] [varchar](8000),
+ \[STATUS] [varchar](8000),
[AMOUNT] [int],
[CREATED] [date]
);
@@ -108,6 +108,9 @@ A public preview of Microsoft Fabric in dbt Cloud is now available!
2. Enter a project name and click **Continue**.
3. Choose **Fabric** as your connection and click **Next**.
4. In the **Configure your environment** section, enter the **Settings** for your new project:
+ - **Server** — Use the service principal's **host** value for the Fabric test endpoint.
+ - **Port** — 1433 (which is the default).
+ - **Database** — Use the service principal's **database** value for the Fabric test endpoint.
5. Enter the **Development credentials** for your new project:
- **Authentication** — Choose **Service Principal** from the dropdown.
- **Tenant ID** — Use the service principal’s **Directory (tenant) id** as the value.
diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index 65c6607cdcd..a3b00177967 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -100,6 +100,10 @@ insert into table analytics.databricks_incremental
### The `insert_overwrite` strategy
+:::caution
+This strategy is currently only compatible with All Purpose Clusters, not SQL Warehouses.
+:::
+
This strategy is most effective when specified alongside a `partition_by` clause in your model config. dbt will run an [atomic `insert overwrite` statement](https://spark.apache.org/docs/3.0.0-preview/sql-ref-syntax-dml-insert-overwrite-table.html) that dynamically replaces all partitions included in your query. Be sure to re-select _all_ of the relevant data for a partition when using this incremental strategy.
If no `partition_by` is specified, then the `insert_overwrite` strategy will atomically replace all contents of the table, overriding all existing data with only the new records. The column schema of the table remains the same, however. This can be desirable in some limited circumstances, since it minimizes downtime while the table contents are overwritten. The operation is comparable to running `truncate` + `insert` on other databases. For atomic replacement of Delta-formatted tables, use the `table` materialization (which runs `create or replace`) instead.
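+
+A sketch of a model config using this strategy (hypothetical partition column and upstream model):
+
+```sql
+{{
+    config(
+        materialized='incremental',
+        incremental_strategy='insert_overwrite',
+        partition_by=['date_day']
+    )
+}}
+
+select * from {{ ref('stg_events') }}
+```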
diff --git a/website/sidebars.js b/website/sidebars.js
index ea7c0c90814..473dfe85e04 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -54,6 +54,7 @@ const sidebarSettings = {
link: { type: "doc", id: "docs/cloud/connect-data-platform/about-connections" },
items: [
"docs/cloud/connect-data-platform/about-connections",
+ "docs/cloud/connect-data-platform/connect-microsoft-fabric",
"docs/cloud/connect-data-platform/connect-starburst-trino",
"docs/cloud/connect-data-platform/connect-snowflake",
"docs/cloud/connect-data-platform/connect-bigquery",
@@ -1059,6 +1060,7 @@ const sidebarSettings = {
"best-practices/materializations/materializations-guide-7-conclusion",
],
},
+ "best-practices/clone-incremental-models",
"best-practices/writing-custom-generic-tests",
"best-practices/best-practice-workflows",
"best-practices/dbt-unity-catalog-best-practices",
diff --git a/website/snippets/_adapters-verified.md b/website/snippets/_adapters-verified.md
index b9a71c67c36..c3607b50125 100644
--- a/website/snippets/_adapters-verified.md
+++ b/website/snippets/_adapters-verified.md
@@ -46,7 +46,7 @@
+
+:::note
+
+This feature is only available on the dbt Cloud Enterprise plan.
+
+:::
+
### Custom branch behavior
By default, all environments will use the default branch in your repository (usually the `main` branch) when accessing your dbt code. This is overridable within each dbt Cloud Environment using the **Default to a custom branch** option. This setting will have slightly different behavior depending on the environment type:
diff --git a/website/static/img/best-practices/clone-command.png b/website/static/img/best-practices/clone-command.png
new file mode 100644
index 00000000000..96a558dd97c
Binary files /dev/null and b/website/static/img/best-practices/clone-command.png differ
diff --git a/website/static/img/best-practices/dag-example.png b/website/static/img/best-practices/dag-example.png
new file mode 100644
index 00000000000..247ede40afe
Binary files /dev/null and b/website/static/img/best-practices/dag-example.png differ
diff --git a/website/static/img/best-practices/slim-ci-job.png b/website/static/img/best-practices/slim-ci-job.png
new file mode 100644
index 00000000000..e6f3d926735
Binary files /dev/null and b/website/static/img/best-practices/slim-ci-job.png differ
diff --git a/website/static/img/docs/deploy/example-repo-caching.png b/website/static/img/docs/deploy/example-repo-caching.png
new file mode 100644
index 00000000000..805d845dccb
Binary files /dev/null and b/website/static/img/docs/deploy/example-repo-caching.png differ
diff --git a/website/vercel.json b/website/vercel.json
index 4749a5a8701..3377b49278d 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -2,6 +2,16 @@
"cleanUrls": true,
"trailingSlash": false,
"redirects": [
+ {
+ "source": "/docs/cloud/dbt-cloud-ide",
+ "destination": "/docs/cloud/dbt-cloud-ide/develop-in-the-cloud",
+ "permanent": true
+ },
+ {
+ "source": "/docs/core/installation",
+ "destination": "/docs/core/installation-overview",
+ "permanent": true
+ },
{
"source": "/docs/cloud/about-cloud-develop",
"destination": "/docs/cloud/about-develop-dbt",