-:::info Important
-
-If you have not already, you must add `config-version: 2` to your dbt_project.yml file.
-See **Upgrading to v0.17.latest from v0.16** below for more details.
-
-:::
diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md
index 149a6951fdc..9b96bb4b766 100644
--- a/website/docs/docs/deploy/ci-jobs.md
+++ b/website/docs/docs/deploy/ci-jobs.md
@@ -11,12 +11,12 @@ You can set up [continuous integration](/docs/deploy/continuous-integration) (CI
dbt Labs recommends that you create your CI job in a dedicated dbt Cloud [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) that's connected to a staging database. Having a separate environment dedicated for CI will provide better isolation between your temporary CI schema builds and your production data builds. Additionally, sometimes teams need their CI jobs to be triggered when a PR is made to a branch other than main. If your team maintains a staging branch as part of your release process, having a separate environment will allow you to set a [custom branch](/faqs/environments/custom-branch-settings) and, accordingly, the CI job in that dedicated environment will be triggered only when PRs are made to the specified custom branch. To learn more, refer to [Get started with CI tests](/guides/set-up-ci).
+
### Prerequisites
-- You have a dbt Cloud account.
+- You have a dbt Cloud account.
- For the [Concurrent CI checks](/docs/deploy/continuous-integration#concurrent-ci-checks) and [Smart cancellation of stale builds](/docs/deploy/continuous-integration#smart-cancellation) features, your dbt Cloud account must be on the [Team or Enterprise plan](https://www.getdbt.com/pricing/).
-- You must be connected using dbt Cloud’s native Git integration with [GitHub](/docs/cloud/git/connect-github), [GitLab](/docs/cloud/git/connect-gitlab), or [Azure DevOps](/docs/cloud/git/connect-azure-devops).
- - With GitLab, you need a paid or self-hosted account which includes support for GitLab webhooks and [project access tokens](https://docs.gitlab.com/ee/user/project/settings/project_access_tokens.html). With GitLab Free, merge requests will invoke CI jobs but CI status updates (success or failure of the job) will not be reported back to GitLab.
- - If you previously configured your dbt project by providing a generic git URL that clones using SSH, you must reconfigure the project to connect through dbt Cloud's native integration.
+- Set up a [connection with your Git provider](/docs/cloud/git/git-configuration-in-dbt-cloud). This integration lets dbt Cloud trigger and run jobs on your behalf.
+ - If you're using a native [GitLab](/docs/cloud/git/connect-gitlab) integration, you need a paid or self-hosted account that includes support for GitLab webhooks and [project access tokens](https://docs.gitlab.com/ee/user/project/settings/project_access_tokens.html). If you're using GitLab Free, merge requests will trigger CI jobs but CI job status updates (success or failure of the job) will not be reported back to GitLab.
To make CI job creation easier, many options on the **CI job** page are set to default values that dbt Labs recommends that you use. If you don't want to use the defaults, you can change them.
@@ -63,12 +63,13 @@ If you're not using dbt Cloud’s native Git integration with [GitHub](/docs/cl
1. Set up a CI job with the [Create Job](/dbt-cloud/api-v2#/operations/Create%20Job) API endpoint using `"job_type": ci` or from the [dbt Cloud UI](#set-up-ci-jobs).
-1. Call the [Trigger Job Run](/dbt-cloud/api-v2#/operations/Trigger%20Job%20Run) API endpoint to trigger the CI job. You must include these fields to the payload:
- - Provide the pull request (PR) ID with one of these fields, even if you're using a different Git provider (like Bitbucket). This can make your code less human-readable but it will _not_ affect dbt functionality.
+1. Call the [Trigger Job Run](/dbt-cloud/api-v2#/operations/Trigger%20Job%20Run) API endpoint to trigger the CI job. You must include both of these fields in the payload (see the sketch after this list):
+ - Provide the pull request (PR) ID using one of these fields:
- `github_pull_request_id`
- `gitlab_merge_request_id`
- - `azure_devops_pull_request_id`
+ - `azure_devops_pull_request_id`
+ - `non_native_pull_request_id` (for example, Bitbucket)
- Provide the `git_sha` or `git_branch` to target the correct commit or branch to run the job against.
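+
+A hedged sketch of such a trigger call (the account ID, job ID, token, and payload values below are placeholders, not from the source):
+
+```bash
+# Illustrative values only: replace the account ID, job ID, token, and payload fields.
+curl --request POST 'https://cloud.getdbt.com/api/v2/accounts/123/jobs/456/run/' \
+  --header 'Authorization: Token YOUR_API_TOKEN' \
+  --header 'Content-Type: application/json' \
+  --data '{
+    "cause": "CI run for PR",
+    "github_pull_request_id": 789,
+    "git_sha": "abc123def456"
+  }'
+```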
## Example pull requests
@@ -110,22 +111,6 @@ If you're experiencing any issues, review some of the common questions and answe
-
Error messages that refer to schemas from previous PRs
diff --git a/website/docs/docs/deploy/continuous-integration.md b/website/docs/docs/deploy/continuous-integration.md
index 0f87965aada..22686c44bd2 100644
--- a/website/docs/docs/deploy/continuous-integration.md
+++ b/website/docs/docs/deploy/continuous-integration.md
@@ -50,3 +50,6 @@ When you push a new commit to a PR, dbt Cloud enqueues a new CI run for the late
+### Run slot treatment
+
+For accounts on the [Enterprise or Team](https://www.getdbt.com/pricing) plans, CI runs won't consume run slots. This guarantees a CI check will never block a production run.
\ No newline at end of file
diff --git a/website/docs/docs/deploy/dashboard-status-tiles.md b/website/docs/docs/deploy/dashboard-status-tiles.md
index 67aa1a93c33..d9e33fc32d6 100644
--- a/website/docs/docs/deploy/dashboard-status-tiles.md
+++ b/website/docs/docs/deploy/dashboard-status-tiles.md
@@ -56,7 +56,7 @@ Note that Mode has also built its own [integration](https://mode.com/get-dbt/) w
Looker does not allow you to directly embed HTML and instead requires creating a [custom visualization](https://docs.looker.com/admin-options/platform/visualizations). One way to do this for admins is to:
- Add a [new visualization](https://fishtown.looker.com/admin/visualizations) on the visualization page for Looker admins. You can use [this URL](https://metadata.cloud.getdbt.com/static/looker-viz.js) to configure a Looker visualization powered by the iFrame. It will look like this:
-
+
- Once you have set up your custom visualization, you can use it on any dashboard! You can configure it with the exposure name, jobID, and token relevant to that dashboard.
@@ -79,7 +79,7 @@ https://metadata.cloud.getdbt.com/exposure-tile?name=&jobId=
+
### Sigma
@@ -99,4 +99,4 @@ https://metadata.au.dbt.com/exposure-tile?name=&jobId=&to
```
:::
-
+
diff --git a/website/docs/docs/deploy/job-scheduler.md b/website/docs/docs/deploy/job-scheduler.md
index fba76f677a7..7a4cd740804 100644
--- a/website/docs/docs/deploy/job-scheduler.md
+++ b/website/docs/docs/deploy/job-scheduler.md
@@ -31,7 +31,7 @@ Familiarize yourself with these useful terms to help you understand how the job
| Over-scheduled job | A situation when a cron-scheduled job's run duration becomes longer than the frequency of the job’s schedule, resulting in a job queue that will grow faster than the scheduler can process the job’s runs. |
| Prep time | The time dbt Cloud takes to create a short-lived environment to execute the job commands in the user's cloud data platform. Prep time varies most significantly at the top of the hour when the dbt Cloud Scheduler experiences a lot of run traffic. |
| Run | A single, unique execution of a dbt job. |
-| Run slot | Run slots control the number of jobs that can run concurrently. Developer and Team plan accounts have a fixed number of run slots, and Enterprise users have [unlimited run slots](/docs/dbt-versions/release-notes/July-2023/faster-run#unlimited-job-concurrency-for-enterprise-accounts). Each running job occupies a run slot for the duration of the run. If you need more jobs to execute in parallel, consider the [Enterprise plan](https://www.getdbt.com/pricing/) |
+| Run slot | Run slots control the number of jobs that can run concurrently. Developer plans have a fixed number of run slots, while Enterprise and Team plans have [unlimited run slots](/docs/dbt-versions/release-notes/July-2023/faster-run#unlimited-job-concurrency-for-enterprise-accounts). Each running job occupies a run slot for the duration of the run. Team and Developer plans are limited to one project each. For additional projects, consider upgrading to the [Enterprise plan](https://www.getdbt.com/pricing/). |
| Threads | When dbt builds a project's DAG, it tries to parallelize the execution by using threads. The [thread](/docs/running-a-dbt-project/using-threads) count is the maximum number of paths through the DAG that dbt can work on simultaneously. The default thread count in a job is 4. |
| Wait time | Amount of time that dbt Cloud waits before running a job, either because there are no available slots or because a previous run of the same job is still in progress. |
diff --git a/website/docs/docs/running-a-dbt-project/using-threads.md b/website/docs/docs/running-a-dbt-project/using-threads.md
index 5eede7abc27..af00dd9cc68 100644
--- a/website/docs/docs/running-a-dbt-project/using-threads.md
+++ b/website/docs/docs/running-a-dbt-project/using-threads.md
@@ -22,5 +22,5 @@ You will define the number of threads in your `profiles.yml` file (for dbt Core
## Related docs
-- [About profiles.yml](https://docs.getdbt.com/reference/profiles.yml)
+- [About profiles.yml](/docs/core/connect-data-platform/profiles.yml)
- [dbt Cloud job scheduler](/docs/deploy/job-scheduler)
diff --git a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
index 665260ed9f4..11a610805a9 100644
--- a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
@@ -34,7 +34,7 @@ Use this guide to fully experience the power of the universal dbt Semantic Layer
- [Define metrics](#define-metrics) in dbt using MetricFlow
- [Test and query metrics](#test-and-query-metrics) with MetricFlow
- [Run a production job](#run-a-production-job) in dbt Cloud
-- [Set up dbt Semantic Layer](#setup) in dbt Cloud
+- [Set up dbt Semantic Layer](#set-up-dbt-semantic-layer) in dbt Cloud
- [Connect and query API](#connect-and-query-api) with dbt Cloud
MetricFlow allows you to define metrics in your dbt project and query them whether in dbt Cloud or dbt Core with [MetricFlow commands](/docs/build/metricflow-commands).
diff --git a/website/docs/faqs/API/rotate-token.md b/website/docs/faqs/API/rotate-token.md
index 144c834ea8a..4470de72d5a 100644
--- a/website/docs/faqs/API/rotate-token.md
+++ b/website/docs/faqs/API/rotate-token.md
@@ -36,7 +36,7 @@ curl --location --request POST 'https://YOUR_ACCESS_URL/api/v2/users/YOUR_USER_I
* Find your `YOUR_CURRENT_TOKEN` by going to **Profile Settings** -> **API Access** and copying the API key.
* Find [`YOUR_ACCESS_URL`](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan.
-:::info Example
+Example:
If `YOUR_USER_ID` = `123`, `YOUR_CURRENT_TOKEN` = `abcf9g`, and your `ACCESS_URL` = `cloud.getdbt.com`, then your curl request will be:
@@ -44,7 +44,7 @@ If `YOUR_USER_ID` = `123`, `YOUR_CURRENT_TOKEN` = `abcf9g`, and your `ACCESS_URL
curl --location --request POST 'https://cloud.getdbt.com/api/v2/users/123/apikey/' \
--header 'Authorization: Token abcf9g'
```
-:::
+
2. Find the new key in the API response or in dbt Cloud.
diff --git a/website/docs/faqs/Accounts/cloud-upgrade-instructions.md b/website/docs/faqs/Accounts/cloud-upgrade-instructions.md
index f8daf393f9b..d16651a944c 100644
--- a/website/docs/faqs/Accounts/cloud-upgrade-instructions.md
+++ b/website/docs/faqs/Accounts/cloud-upgrade-instructions.md
@@ -6,11 +6,13 @@ description: "Instructions for upgrading a dbt Cloud account after the trial end
dbt Cloud offers [several plans](https://www.getdbt.com/pricing/) with different features that meet your needs. This document is for dbt Cloud admins and explains how to select a plan in order to continue using dbt Cloud.
-:::tip Before you begin
-- You **_must_** be part of the [Owner](/docs/cloud/manage-access/self-service-permissions) user group to make billing changes. Users not included in this group will not see these options.
+## Prerequisites
+
+Before you begin:
+- You _must_ be part of the [Owner](/docs/cloud/manage-access/self-service-permissions) user group to make billing changes. Users not included in this group will not see these options.
- All amounts shown in dbt Cloud are in U.S. Dollars (USD)
- When your trial expires, your account's default plan enrollment will be a Team plan.
-:::
+
## Select a plan
diff --git a/website/docs/faqs/Git/git-migration.md b/website/docs/faqs/Git/git-migration.md
index 775ae3679e3..156227d59ae 100644
--- a/website/docs/faqs/Git/git-migration.md
+++ b/website/docs/faqs/Git/git-migration.md
@@ -16,7 +16,7 @@ To migrate from one git provider to another, refer to the following steps to avo
2. Go back to dbt Cloud and set up your [integration for the new git provider](/docs/cloud/git/connect-github), if needed.
3. Disconnect the old repository in dbt Cloud by going to **Account Settings** and then **Projects**. Click on the **Repository** link, then click **Edit** and **Disconnect**.
-
+
4. On the same page, connect to the new git provider repository by clicking **Configure Repository**
- If you're using the native integration, you may need to OAuth to it.
diff --git a/website/docs/faqs/Models/unique-model-names.md b/website/docs/faqs/Models/unique-model-names.md
index c721fca7c6e..7878a5a704c 100644
--- a/website/docs/faqs/Models/unique-model-names.md
+++ b/website/docs/faqs/Models/unique-model-names.md
@@ -10,7 +10,7 @@ id: unique-model-names
Within one project: yes! To build dependencies between models, you need to use the `ref` function, and pass in the model name as an argument. dbt uses that model name to uniquely resolve the `ref` to a specific model. As a result, these model names need to be unique, _even if they are in distinct folders_.
-A model in one project can have the same name as a model in another project (installed as a dependency). dbt uses the project name to uniquely identify each model. We call this "namespacing." If you `ref` a model with a duplicated name, it will resolve to the model within the same namespace (package or project), or raise an error because of an ambiguous reference. Use [two-argument `ref`](/reference/dbt-jinja-functions/ref#two-argument-variant) to disambiguate references by specifying the namespace.
+A model in one project can have the same name as a model in another project (installed as a dependency). dbt uses the project name to uniquely identify each model. We call this "namespacing." If you `ref` a model with a duplicated name, it will resolve to the model within the same namespace (package or project), or raise an error because of an ambiguous reference. Use [two-argument `ref`](/reference/dbt-jinja-functions/ref#ref-project-specific-models) to disambiguate references by specifying the namespace.
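+
+For example, the two-argument form looks like this:
+
+```sql
+select * from {{ ref('project_or_package', 'model_name') }}
+```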
Those models will still need to land in distinct locations in the data warehouse. Read the docs on [custom aliases](/docs/build/custom-aliases) and [custom schemas](/docs/build/custom-schemas) for details on how to achieve this.
diff --git a/website/docs/guides/adapter-creation.md b/website/docs/guides/adapter-creation.md
index 8bf082b04a0..28e0e8253ad 100644
--- a/website/docs/guides/adapter-creation.md
+++ b/website/docs/guides/adapter-creation.md
@@ -566,12 +566,6 @@ It should be noted that both of these files are included in the bootstrapped out
## Test your adapter
-:::info
-
-Previously, we offered a packaged suite of tests for dbt adapter functionality: [`pytest-dbt-adapter`](https://github.com/dbt-labs/dbt-adapter-tests). We are deprecating that suite, in favor of the newer testing framework outlined in this document.
-
-:::
-
This document has two sections:
1. Refer to "About the testing framework" for a description of the standard framework that we maintain for using pytest together with dbt. It includes an example that shows the anatomy of a simple test case.
diff --git a/website/docs/guides/bigquery-qs.md b/website/docs/guides/bigquery-qs.md
index 9cf2447fa52..4f461a3cf3a 100644
--- a/website/docs/guides/bigquery-qs.md
+++ b/website/docs/guides/bigquery-qs.md
@@ -23,7 +23,6 @@ In this quickstart guide, you'll learn how to use dbt Cloud with BigQuery. It wi
:::tip Videos for you
You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) for free if you're interested in course learning with videos.
-
:::
### Prerequisites
diff --git a/website/docs/guides/create-new-materializations.md b/website/docs/guides/create-new-materializations.md
index af2732c0c39..52a8594b0d2 100644
--- a/website/docs/guides/create-new-materializations.md
+++ b/website/docs/guides/create-new-materializations.md
@@ -13,7 +13,7 @@ recently_updated: true
## Introduction
-The model materializations you're familiar with, `table`, `view`, and `incremental` are implemented as macros in a package that's distributed along with dbt. You can check out the [source code for these materializations](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/include/global_project/macros/materializations). If you need to create your own materializations, reading these files is a good place to start. Continue reading below for a deep-dive into dbt materializations.
+The model materializations you're familiar with, `table`, `view`, and `incremental` are implemented as macros in a package that's distributed along with dbt. You can check out the [source code for these materializations](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/adapters/include/global_project/macros/materializations). If you need to create your own materializations, reading these files is a good place to start. Continue reading below for a deep-dive into dbt materializations.
:::caution
diff --git a/website/docs/guides/custom-cicd-pipelines.md b/website/docs/guides/custom-cicd-pipelines.md
index bd6d7617623..1778098f752 100644
--- a/website/docs/guides/custom-cicd-pipelines.md
+++ b/website/docs/guides/custom-cicd-pipelines.md
@@ -511,7 +511,7 @@ This section is only for those projects that connect to their git repository usi
:::
-The setup for this pipeline will use the same steps as the prior page. Before moving on, **follow steps 1-5 from the [prior page](https://docs.getdbt.com/guides/orchestration/custom-cicd-pipelines/3-dbt-cloud-job-on-merge)**
+The setup for this pipeline will use the same steps as the prior page. Before moving on, follow steps 1-5 from the [prior page](https://docs.getdbt.com/guides/custom-cicd-pipelines?step=2).
### 1. Create a pipeline job that runs when PRs are created
diff --git a/website/docs/guides/databricks-qs.md b/website/docs/guides/databricks-qs.md
index 5a0c5536e7f..cb01daec394 100644
--- a/website/docs/guides/databricks-qs.md
+++ b/website/docs/guides/databricks-qs.md
@@ -21,7 +21,6 @@ In this quickstart guide, you'll learn how to use dbt Cloud with Databricks. It
:::tip Videos for you
You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) for free if you're interested in course learning with videos.
-
:::
### Prerequisites
diff --git a/website/docs/guides/debug-schema-names.md b/website/docs/guides/debug-schema-names.md
index c7bf1a195b1..24b7984adf5 100644
--- a/website/docs/guides/debug-schema-names.md
+++ b/website/docs/guides/debug-schema-names.md
@@ -14,11 +14,8 @@ recently_updated: true
## Introduction
-If a model uses the [`schema` config](/reference/resource-properties/schema) but builds under an unexpected schema, here are some steps for debugging the issue.
+If a model uses the [`schema` config](/reference/resource-properties/schema) but builds under an unexpected schema, here are some steps for debugging the issue. For a full explanation of custom schemas, refer to [Custom schemas](/docs/build/custom-schemas).
-:::info
-The full explanation on custom schemas can be found [here](/docs/build/custom-schemas).
-:::
You can also follow along via this video:
@@ -94,9 +91,7 @@ Now, re-read through the logic of your `generate_schema_name` macro, and mentall
You should find that the schema dbt is constructing for your model matches the output of your `generate_schema_name` macro.
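+
+For reference, here's a sketch of dbt's built-in `generate_schema_name` logic (by default, any custom schema is appended to the target schema):
+
+```sql
+{% macro generate_schema_name(custom_schema_name, node) -%}
+    {%- set default_schema = target.schema -%}
+    {%- if custom_schema_name is none -%}
+        {{ default_schema }}
+    {%- else -%}
+        {{ default_schema }}_{{ custom_schema_name | trim }}
+    {%- endif -%}
+{%- endmacro %}
+```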
-:::info
-Note that snapshots do not follow this behavior, check out the docs on [target_schema](/reference/resource-configs/target_schema) instead.
-:::
+Be careful: snapshots do not follow this behavior. Refer to the docs on [target_schema](/reference/resource-configs/target_schema) instead.
## Adjust as necessary
diff --git a/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md b/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md
index cb3a6804247..a2967ccbe15 100644
--- a/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md
+++ b/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md
@@ -128,15 +128,14 @@ if __name__ == '__main__':
4. Replace **``** and **``** with the correct values of your environment and [Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan.
-:::tip
- To find these values, navigate to **dbt Cloud**, select **Deploy -> Jobs**. Select the Job you want to run and copy the URL. For example: `https://cloud.getdbt.com/deploy/000000/projects/111111/jobs/222222`
-and therefore valid code would be:
+ * To find these values, navigate to **dbt Cloud**, select **Deploy -> Jobs**. Select the Job you want to run and copy the URL. For example: `https://cloud.getdbt.com/deploy/000000/projects/111111/jobs/222222`
+ so valid code for this example would be:
- # Your URL is structured https:///deploy//projects//jobs/
+Your URL is structured `https:///deploy//projects//jobs/`
account_id = 000000
job_id = 222222
base_url = "cloud.getdbt.com"
-:::
+
5. Run the Notebook. It will fail, but you should see **a `job_id` widget** at the top of your notebook.
@@ -161,9 +160,7 @@ DbtJobRunStatus.RUNNING
DbtJobRunStatus.SUCCESS
```
-:::note
You can cancel the job from dbt Cloud if necessary.
-:::
## Configure the workflows to run the dbt Cloud jobs
diff --git a/website/docs/guides/manual-install-qs.md b/website/docs/guides/manual-install-qs.md
index e9c1af259ac..fcd1e5e9599 100644
--- a/website/docs/guides/manual-install-qs.md
+++ b/website/docs/guides/manual-install-qs.md
@@ -70,7 +70,7 @@ $ pwd
-6. Update the following values in the `dbt_project.yml` file:
+6. dbt provides the following values in the `dbt_project.yml` file:
@@ -92,7 +92,7 @@ models:
## Connect to BigQuery
-When developing locally, dbt connects to your using a [profile](/docs/core/connect-data-platform/connection-profiles), which is a YAML file with all the connection details to your warehouse.
+When developing locally, dbt connects to your warehouse using a [profile](/docs/core/connect-data-platform/connection-profiles), which is a YAML file with all the connection details to your warehouse.
1. Create a file in the `~/.dbt/` directory named `profiles.yml`.
2. Move your BigQuery keyfile into this directory.
diff --git a/website/docs/guides/redshift-qs.md b/website/docs/guides/redshift-qs.md
index 890be27e50a..c81a4d247a5 100644
--- a/website/docs/guides/redshift-qs.md
+++ b/website/docs/guides/redshift-qs.md
@@ -18,10 +18,8 @@ In this quickstart guide, you'll learn how to use dbt Cloud with Redshift. It wi
- Document your models
- Schedule a job to run
-
:::tip Videos for you
You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) for free if you're interested in course learning with videos.
-
:::
### Prerequisites
diff --git a/website/docs/guides/sl-migration.md b/website/docs/guides/sl-migration.md
index 8ede40a6a2d..afa181646e3 100644
--- a/website/docs/guides/sl-migration.md
+++ b/website/docs/guides/sl-migration.md
@@ -25,21 +25,26 @@ dbt Labs recommends completing these steps in a local dev environment (such as t
1. Create new Semantic Model configs as YAML files in your dbt project.*
1. Upgrade the metrics configs in your project to the new spec.*
1. Delete your old metrics file or remove the `.yml` file extension so they're ignored at parse time. Remove the `dbt-metrics` package from your project. Remove any macros that reference `dbt-metrics`, like `metrics.calculate()`. Make sure that any packages you’re using don't have references to the old metrics spec.
-1. Install the CLI with `python -m pip install "dbt-metricflow[your_adapter_name]"`. For example:
+1. Install the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) to run MetricFlow commands and define your semantic model configurations.
+ - If you're using dbt Core, install the [MetricFlow CLI](/docs/build/metricflow-commands) with `python -m pip install "dbt-metricflow[your_adapter_name]"`. For example:
```bash
python -m pip install "dbt-metricflow[snowflake]"
```
- **Note** - The MetricFlow CLI is not available in the IDE at this time. Support is coming soon.
+ **Note** - MetricFlow commands aren't yet supported in the dbt Cloud IDE.
-1. Run `dbt parse`. This parses your project and creates a `semantic_manifest.json` file in your target directory. MetricFlow needs this file to query metrics. If you make changes to your configs, you will need to parse your project again.
-1. Run `mf list metrics` to view the metrics in your project.
-1. Test querying a metric by running `mf query --metrics --group-by `. For example:
+2. Run `dbt parse`. This parses your project and creates a `semantic_manifest.json` file in your target directory. MetricFlow needs this file to query metrics. If you make changes to your configs, you will need to parse your project again.
+3. Run `mf list metrics` to view the metrics in your project.
+4. Test querying a metric by running `mf query --metrics --group-by `. For example:
```bash
mf query --metrics revenue --group-by metric_time
```
-1. Run `mf validate-configs` to run semantic and warehouse validations. This ensures your configs are valid and the underlying objects exist in your warehouse.
-1. Push these changes to a new branch in your repo.
+5. Run `mf validate-configs` to run semantic and warehouse validations. This ensures your configs are valid and the underlying objects exist in your warehouse.
+6. Push these changes to a new branch in your repo.
+
+:::info `ref` not supported
+The dbt Semantic Layer API doesn't support `ref` to call dbt objects. This is currently due to differences in architecture between the legacy Semantic Layer and the re-released Semantic Layer. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into your Semantic Layer metric definitions as code.
+:::
**To make this process easier, dbt Labs provides a [custom migration tool](https://github.com/dbt-labs/dbt-converter) that automates these steps for you. You can find installation instructions in the [README](https://github.com/dbt-labs/dbt-converter/blob/master/README.md). Derived metrics aren’t supported in the migration tool, and will have to be migrated manually.*
diff --git a/website/docs/guides/sl-partner-integration-guide.md b/website/docs/guides/sl-partner-integration-guide.md
index 61d558f504d..7eb158a2c85 100644
--- a/website/docs/guides/sl-partner-integration-guide.md
+++ b/website/docs/guides/sl-partner-integration-guide.md
@@ -15,10 +15,7 @@ recently_updated: true
To fit your tool within the world of the Semantic Layer, dbt Labs offers some best practice recommendations for how to expose metrics and allow users to interact with them seamlessly.
-:::note
This is an evolving guide that is meant to provide recommendations based on our experience. If you have any feedback, we'd love to hear it!
-:::
-
### Prerequisites
diff --git a/website/docs/guides/snowflake-qs.md b/website/docs/guides/snowflake-qs.md
index 5b4f9e3e2be..0401c37871f 100644
--- a/website/docs/guides/snowflake-qs.md
+++ b/website/docs/guides/snowflake-qs.md
@@ -26,7 +26,7 @@ You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamen
You can also watch the [YouTube video on dbt and Snowflake](https://www.youtube.com/watch?v=kbCkwhySV_I&list=PL0QYlrC86xQm7CoOH6RS7hcgLnd3OQioG).
:::
-
+
### Prerequisites
- You have a [dbt Cloud account](https://www.getdbt.com/signup/).
diff --git a/website/docs/reference/analysis-properties.md b/website/docs/reference/analysis-properties.md
index 880aeddbb0d..1601c817830 100644
--- a/website/docs/reference/analysis-properties.md
+++ b/website/docs/reference/analysis-properties.md
@@ -18,6 +18,7 @@ analyses:
[description](/reference/resource-properties/description):
[docs](/reference/resource-configs/docs):
show: true | false
+ node_color: # Use name (such as node_color: purple) or hex code with quotes (such as node_color: "#cd7f32")
config:
[tags](/reference/resource-configs/tags): | []
columns:
diff --git a/website/docs/reference/commands/debug.md b/website/docs/reference/commands/debug.md
index 4ae5a1d2dd9..e1865ff1b67 100644
--- a/website/docs/reference/commands/debug.md
+++ b/website/docs/reference/commands/debug.md
@@ -7,7 +7,7 @@ id: "debug"
`dbt debug` is a utility function to test the database connection and display information for debugging purposes, such as the validity of your project file and your installation of any requisite dependencies (like `git` when you run `dbt deps`).
-*Note: Not to be confused with [debug-level logging](/reference/global-configs/about-global-configs#debug-level-logging) via the `--debug` option which increases verbosity.
+*Note: Not to be confused with [debug-level logging](/reference/global-configs/logs#debug-level-logging) via the `--debug` option which increases verbosity.
### Example usage
diff --git a/website/docs/reference/dbt-jinja-functions/as_text.md b/website/docs/reference/dbt-jinja-functions/as_text.md
deleted file mode 100644
index 6b26cfa327d..00000000000
--- a/website/docs/reference/dbt-jinja-functions/as_text.md
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title: "About as_text filter"
-sidebar_label: "as_text"
-id: "as_text"
-description: "Use this filter to convert Jinja-compiled output back to text."
----
-
-The `as_text` Jinja filter will coerce Jinja-compiled output back to text. It
-can be used in YAML rendering contexts where values _must_ be provided as
-strings, rather than as the datatype that they look like.
-
-:::info Heads up
-In dbt v0.17.1, native rendering is not enabled by default. As such,
-the `as_text` filter has no functional effect.
-
-It is still possible to natively render specific values using the [`as_bool`](/reference/dbt-jinja-functions/as_bool),
-[`as_number`](/reference/dbt-jinja-functions/as_number), and [`as_native`](/reference/dbt-jinja-functions/as_native) filters.
-
-:::
-
-### Usage
-
-In the example below, the `as_text` filter is used to assert that `''` is an
-empty string. In a native rendering, `''` would be coerced to the Python
-keyword `None`. This specification is necessary in `v0.17.0`, but it is not
-useful or necessary in later versions of dbt.
-
-
-
-```yml
-models:
- - name: orders
- columns:
- - name: order_status
- tests:
- - accepted_values:
- values: ['pending', 'shipped', "{{ '' | as_text }}"]
-
-```
-
-
-
-As of `v0.17.1`, native rendering does not occur by default, and the `as_text`
-specification is superfluous.
-
-
-
-```yml
-models:
- - name: orders
- columns:
- - name: order_status
- tests:
- - accepted_values:
- values: ['pending', 'shipped', '']
-```
-
-
diff --git a/website/docs/reference/dbt-jinja-functions/builtins.md b/website/docs/reference/dbt-jinja-functions/builtins.md
index edc5f34ffda..7d970b9d5e1 100644
--- a/website/docs/reference/dbt-jinja-functions/builtins.md
+++ b/website/docs/reference/dbt-jinja-functions/builtins.md
@@ -42,9 +42,9 @@ From dbt v1.5 and higher, use the following macro to extract user-provided argum
-- call builtins.ref based on provided positional arguments
{% set rel = None %}
{% if packagename is not none %}
- {% set rel = return(builtins.ref(packagename, modelname, version=version)) %}
+ {% set rel = builtins.ref(packagename, modelname, version=version) %}
{% else %}
- {% set rel = return(builtins.ref(modelname, version=version)) %}
+ {% set rel = builtins.ref(modelname, version=version) %}
{% endif %}
-- finally, override the database name with "dev"
diff --git a/website/docs/reference/dbt-jinja-functions/cross-database-macros.md b/website/docs/reference/dbt-jinja-functions/cross-database-macros.md
index 4df8275d4bd..334bcfe5760 100644
--- a/website/docs/reference/dbt-jinja-functions/cross-database-macros.md
+++ b/website/docs/reference/dbt-jinja-functions/cross-database-macros.md
@@ -30,6 +30,7 @@ Please make sure to take a look at the [SQL expressions section](#sql-expression
- [type\_numeric](#type_numeric)
- [type\_string](#type_string)
- [type\_timestamp](#type_timestamp)
+ - [current\_timestamp](#current_timestamp)
- [Set functions](#set-functions)
- [except](#except)
- [intersect](#intersect)
@@ -76,6 +77,7 @@ Please make sure to take a look at the [SQL expressions section](#sql-expression
- [type\_numeric](#type_numeric)
- [type\_string](#type_string)
- [type\_timestamp](#type_timestamp)
+ - [current\_timestamp](#current_timestamp)
- [Set functions](#set-functions)
- [except](#except)
- [intersect](#intersect)
@@ -316,6 +318,29 @@ This macro yields the database-specific data type for a `TIMESTAMP` (which may o
TIMESTAMP
```
+### current_timestamp
+
+This macro returns the current date and time for the system. Depending on the adapter:
+
+- The result may be an aware or naive timestamp.
+- The result may correspond to the start of the statement or the start of the transaction.
+
+
+**Args**
+- None
+
+**Usage**
+- You can use the `current_timestamp()` macro within your dbt SQL files like this:
+
+```sql
+{{ dbt.current_timestamp() }}
+```
+
+**Sample output (PostgreSQL)**
+
+```sql
+now()
+```
+
## Set functions
### except
diff --git a/website/docs/reference/dbt-jinja-functions/debug-method.md b/website/docs/reference/dbt-jinja-functions/debug-method.md
index 0938970b50c..778ad095693 100644
--- a/website/docs/reference/dbt-jinja-functions/debug-method.md
+++ b/website/docs/reference/dbt-jinja-functions/debug-method.md
@@ -6,9 +6,9 @@ description: "The `{{ debug() }}` macro will open an iPython debugger."
---
-:::caution New in v0.14.1
+:::warning Development environment only
-The `debug` macro is new in dbt v0.14.1, and is only intended to be used in a development context with dbt. Do not deploy code to production which uses the `debug` macro.
+The `debug` macro is only intended to be used in a development context with dbt. Do not deploy code to production that uses the `debug` macro.
:::
diff --git a/website/docs/reference/dbt-jinja-functions/env_var.md b/website/docs/reference/dbt-jinja-functions/env_var.md
index f4cc05cec0f..a8f2a94fbd2 100644
--- a/website/docs/reference/dbt-jinja-functions/env_var.md
+++ b/website/docs/reference/dbt-jinja-functions/env_var.md
@@ -100,6 +100,7 @@ select 1 as id
-:::info dbt Cloud Usage
+### dbt Cloud usage
+
If you are using dbt Cloud, you must adhere to the naming conventions for environment variables. Environment variables in dbt Cloud must be prefixed with `DBT_` (including `DBT_ENV_CUSTOM_ENV_` or `DBT_ENV_SECRET_`). Environment variable keys are uppercased and case sensitive. When referencing `{{env_var('DBT_KEY')}}` in your project's code, the key must match exactly the variable defined in dbt Cloud's UI.
-:::
+
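+A minimal sketch, assuming an environment variable named `DBT_REGION` has been defined in dbt Cloud's UI (the variable name is illustrative, not from the source):
+
+```sql
+-- DBT_REGION is an assumed example variable, set in dbt Cloud's environment variable settings
+select * from analytics_{{ env_var('DBT_REGION') }}.orders
+```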
diff --git a/website/docs/reference/dbt-jinja-functions/ref.md b/website/docs/reference/dbt-jinja-functions/ref.md
index fda5992e234..bc1f3f1ba9e 100644
--- a/website/docs/reference/dbt-jinja-functions/ref.md
+++ b/website/docs/reference/dbt-jinja-functions/ref.md
@@ -3,6 +3,7 @@ title: "About ref function"
sidebar_label: "ref"
id: "ref"
description: "Read this guide to understand the builtins Jinja function in dbt."
+keyword: dbt mesh, project dependencies, ref, cross project ref
---
The most important function in dbt is `ref()`; it's impossible to build even moderately complex models without it. `ref()` is how you reference one model within another. This is a very common behavior, as typically models are built to be "stacked" on top of one another. Here is how this looks in practice:
@@ -68,15 +69,19 @@ select * from {{ ref('model_name', version=1) }}
select * from {{ ref('model_name') }}
```
-### Two-argument variant
+### Ref project-specific models
-You can also use a two-argument variant of the `ref` function. With this variant, you can pass both a namespace (project or package) and model name to `ref` to avoid ambiguity. When using two arguments with projects (not packages), you also need to set [cross project dependencies](/docs/collaborate/govern/project-dependencies).
+You can also reference models from different projects using the two-argument variant of the `ref` function. By specifying both a namespace (a project or package) and a model name, you make the reference explicit and avoid ambiguity. This is especially useful when working with models across multiple projects or packages.
+
+When using two arguments with projects (not packages), you also need to set [cross project dependencies](/docs/collaborate/govern/project-dependencies).
+
+The following syntax demonstrates how to reference a model from a specific project or package:
```sql
select * from {{ ref('project_or_package', 'model_name') }}
```
We recommend using two-argument `ref` any time you are referencing a model defined in a different package or project. While not required in all cases, it's more explicit for you, for dbt, and for future readers of your code.
diff --git a/website/docs/reference/dbt-jinja-functions/target.md b/website/docs/reference/dbt-jinja-functions/target.md
index e7d08db592f..968f64d0f8d 100644
--- a/website/docs/reference/dbt-jinja-functions/target.md
+++ b/website/docs/reference/dbt-jinja-functions/target.md
@@ -1,20 +1,18 @@
---
-title: "About target variable"
+title: "About target variables"
sidebar_label: "target"
id: "target"
-description: "Contains information about your connection to the warehouse."
+description: "The `target` variable contains information about your connection to the warehouse."
---
-`target` contains information about your connection to the warehouse.
+The `target` variable contains information about your connection to the warehouse.
-* **dbt Core:** These values are based on the target defined in your [`profiles.yml` file](/docs/core/connect-data-platform/profiles.yml)
-* **dbt Cloud Scheduler:**
- * `target.name` is defined per job as described [here](/docs/build/custom-target-names).
- * For all other attributes, the values are defined by the deployment connection. To check these values, click **Deploy** from the upper left and select **Environments**. Then, select the relevant deployment environment, and click **Settings**.
-* **dbt Cloud IDE:** The values are defined by your connection and credentials. To check any of these values, head to your account (via your profile image in the top right hand corner), and select the project under "Credentials".
+- **dbt Core:** These values are based on the target defined in your [profiles.yml](/docs/core/connect-data-platform/profiles.yml) file. Note that for certain adapters, additional configuration steps may be required. Refer to the [set up page](/docs/core/connect-data-platform/about-core-connections) for your data platform.
+- **dbt Cloud:** To learn more about setting up your adapter in dbt Cloud, refer to [About data platform connections](/docs/cloud/connect-data-platform/about-connections).
+ - **[dbt Cloud Scheduler](/docs/deploy/job-scheduler)**: `target.name` is defined per job as described in [Custom target names](/docs/build/custom-target-names). For other attributes, values are defined by the deployment connection. To check these values, click **Deploy** and select **Environments**. Then, select the relevant deployment environment, and click **Settings**.
+ - **[dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud)**: These values are defined by your connection and credentials. To edit these values, click the gear icon in the top right, select **Profile settings**, and click **Credentials**. Select and edit a project to set up the credentials and target name.
-
-Some configs are shared between all adapters, while others are adapter-specific.
+Some configurations are shared between all adapters, while others are adapter-specific.
## Common
| Variable | Example | Description |
@@ -54,6 +52,7 @@ Some configs are shared between all adapters, while others are adapter-specific.
| `target.dataset` | dbt_alice | The dataset of the active profile |
## Examples
+
### Use `target.name` to limit data in dev
As long as you use sensible target names, you can perform conditional logic to limit data when working in dev.
@@ -68,6 +67,7 @@ where created_at >= dateadd('day', -3, current_date)
```
### Use `target.name` to change your source database
+
If you have specific Snowflake databases configured for your dev/qa/prod environments,
you can set up your sources to compile to different databases depending on your
environment.
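+A minimal sketch of such a source definition (the source and database names are illustrative):
+
+```yml
+sources:
+  - name: raw_jaffle_shop
+    database: "{% if target.name == 'prod' %}raw_prod{% else %}raw_dev{% endif %}"
+    tables:
+      - name: orders
+```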
diff --git a/website/docs/reference/dbt_project.yml.md b/website/docs/reference/dbt_project.yml.md
index a5ad601f78b..ae911200b40 100644
--- a/website/docs/reference/dbt_project.yml.md
+++ b/website/docs/reference/dbt_project.yml.md
@@ -1,6 +1,8 @@
Every [dbt project](/docs/build/projects) needs a `dbt_project.yml` file — this is how dbt knows a directory is a dbt project. It also contains important information that tells dbt how to operate your project.
+dbt uses [YAML](https://yaml.org/) in a few different places. If you're new to YAML, it would be worth learning how arrays, dictionaries, and strings are represented.
+
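+For instance, here's how those shapes commonly appear in `dbt_project.yml` (an illustrative fragment, not a complete file):
+
+```yml
+name: my_project            # a string
+model-paths: ["models"]     # an array
+models:                     # a dictionary of nested keys
+  my_project:
+    +materialized: view
+```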
By default, dbt will look for `dbt_project.yml` in your current working directory and its parents, but you can set a different directory using the `--project-dir` flag.
@@ -15,11 +17,6 @@ Starting from dbt v1.5 and higher, you can specify your dbt Cloud project ID in
-:::info YAML syntax
-dbt uses YAML in a few different places. If you're new to YAML, it would be worth taking the time to learn how arrays, dictionaries, and strings are represented.
-:::
-
-
Something to note, you can't set up a "property" in the `dbt_project.yml` file if it's not a config (an example is [macros](/reference/macro-properties)). This applies to all types of resources. Refer to [Configs and properties](/reference/configs-and-properties) for more detail.
The following example is a list of all available configurations in the `dbt_project.yml` file:
diff --git a/website/docs/reference/global-configs/print-output.md b/website/docs/reference/global-configs/print-output.md
index 112b92b546f..78de635f2dd 100644
--- a/website/docs/reference/global-configs/print-output.md
+++ b/website/docs/reference/global-configs/print-output.md
@@ -8,35 +8,17 @@ sidebar: "Print output"
-By default, dbt includes `print()` messages in standard out (stdout). You can use the `NO_PRINT` config to prevent these messages from showing up in stdout.
-
-
-
-```yaml
-config:
- no_print: true
-```
-
-
+By default, dbt includes `print()` messages in standard out (stdout). You can use the `DBT_NO_PRINT` environment variable to prevent these messages from showing up in stdout.
-By default, dbt includes `print()` messages in standard out (stdout). You can use the `PRINT` config to prevent these messages from showing up in stdout.
-
-
-
-```yaml
-config:
- print: false
-```
-
-
+By default, dbt includes `print()` messages in standard out (stdout). You can use the `DBT_PRINT` environment variable to prevent these messages from showing up in stdout.
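+
+For example, to suppress `print()` output for a single invocation (a sketch; dbt accepts boolean-style values for these environment variables):
+
+```text
+DBT_PRINT=false dbt run
+```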
:::warning Syntax deprecation
-The original `NO_PRINT` syntax has been deprecated, starting with dbt v1.5. Backward compatibility is supported but will be removed in an as-of-yet-undetermined future release.
+The original `DBT_NO_PRINT` environment variable has been deprecated, starting with dbt v1.5. Backward compatibility is supported but will be removed in an as-of-yet-undetermined future release.
:::
@@ -46,8 +28,6 @@ Supply `--no-print` flag to `dbt run` to suppress `print()` messages from showin
```text
dbt --no-print run
-...
-
```
### Printer width
diff --git a/website/docs/reference/global-configs/usage-stats.md b/website/docs/reference/global-configs/usage-stats.md
index 1f9492f4a43..01465bcac2a 100644
--- a/website/docs/reference/global-configs/usage-stats.md
+++ b/website/docs/reference/global-configs/usage-stats.md
@@ -18,4 +18,3 @@ config:
dbt Core users can also use the DO_NOT_TRACK environment variable to enable or disable sending anonymous data. For more information, see [Environment variables](/docs/build/environment-variables).
`DO_NOT_TRACK=1` is the same as `DBT_SEND_ANONYMOUS_USAGE_STATS=False`
-`DO_NOT_TRACK=0` is the same as `DBT_SEND_ANONYMOUS_USAGE_STATS=True`
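+For example, either invocation below disables anonymous usage tracking (illustrative usage):
+
+```text
+DO_NOT_TRACK=1 dbt run
+DBT_SEND_ANONYMOUS_USAGE_STATS=False dbt run
+```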
diff --git a/website/docs/reference/model-properties.md b/website/docs/reference/model-properties.md
index 65f9307b5b3..46fb0ca3bad 100644
--- a/website/docs/reference/model-properties.md
+++ b/website/docs/reference/model-properties.md
@@ -16,6 +16,7 @@ models:
[description](/reference/resource-properties/description):
[docs](/reference/resource-configs/docs):
show: true | false
+ node_color: # Use name (such as node_color: purple) or hex code with quotes (such as node_color: "#cd7f32")
[latest_version](/reference/resource-properties/latest_version):
[deprecation_date](/reference/resource-properties/deprecation_date):
[access](/reference/resource-configs/access): private | protected | public
diff --git a/website/docs/reference/node-selection/methods.md b/website/docs/reference/node-selection/methods.md
index 61fd380e11b..549bc5d45e1 100644
--- a/website/docs/reference/node-selection/methods.md
+++ b/website/docs/reference/node-selection/methods.md
@@ -8,9 +8,6 @@ you can omit it (the default value will be one of `path`, `file` or `fqn`).
-:::info New functionality
-New in v1.5!
-:::
Many of the methods below support Unix-style wildcards:
diff --git a/website/docs/reference/node-selection/syntax.md b/website/docs/reference/node-selection/syntax.md
index 22946903b7d..61b53ea5ebd 100644
--- a/website/docs/reference/node-selection/syntax.md
+++ b/website/docs/reference/node-selection/syntax.md
@@ -158,7 +158,6 @@ If both the flag and env var are provided, the flag takes precedence.
#### Notes:
- The `--state` artifacts must be of schema versions that are compatible with the currently running dbt version.
-- The path to state artifacts can be set via the `--state` flag or `DBT_ARTIFACT_STATE_PATH` environment variable. If both the flag and env var are provided, the flag takes precedence.
- These are powerful, complex features. Read about [known caveats and limitations](/reference/node-selection/state-comparison-caveats) to state comparison.
### The "result" status
@@ -174,7 +173,7 @@ The following dbt commands produce `run_results.json` artifacts whose results ca
After issuing one of the above commands, you can reference the results by adding a selector to a subsequent command as follows:
```bash
-# You can also set the DBT_ARTIFACT_STATE_PATH environment variable instead of the --state flag.
+# You can also set the DBT_STATE environment variable instead of the --state flag.
dbt run --select "result: --defer --state path/to/prod/artifacts"
```
diff --git a/website/docs/reference/parsing.md b/website/docs/reference/parsing.md
index 1a68ba0d476..6eed4c96af0 100644
--- a/website/docs/reference/parsing.md
+++ b/website/docs/reference/parsing.md
@@ -41,7 +41,7 @@ The [`PARTIAL_PARSE` global config](/reference/global-configs/parsing) can be en
Parse-time attributes (dependencies, configs, and resource properties) are resolved using the parse-time context. When partial parsing is enabled, and certain context variables change, those attributes will _not_ be re-resolved, and are likely to become stale.
-In particular, you may see **incorrect results** if these attributes depend on "volatile" context variables, such as [`run_started_at`](/reference/dbt-jinja-functions/run_started_at), [`invocation_id`](/reference/dbt-jinja-functions/invocation_id), or [flags](/reference/dbt-jinja-functions/flags). These variables are likely (or even guaranteed!) to change in each invocation. We _highly discourage_ you from using these variables to set parse-time attributes (dependencies, configs, and resource properties).
+In particular, you may see incorrect results if these attributes depend on "volatile" context variables, such as [`run_started_at`](/reference/dbt-jinja-functions/run_started_at), [`invocation_id`](/reference/dbt-jinja-functions/invocation_id), or [flags](/reference/dbt-jinja-functions/flags). These variables are likely (or even guaranteed!) to change in each invocation. dbt Labs _strongly discourages_ you from using these variables to set parse-time attributes (dependencies, configs, and resource properties).
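+
+As a sketch of the anti-pattern (not from the source), consider a parse-time config that depends on a volatile variable; under partial parsing, it silently keeps the value from whenever the file was last parsed:
+
+```sql
+-- Anti-pattern sketch: run_started_at is resolved at parse time here,
+-- so partial parsing will reuse a stale alias on later invocations.
+{{ config(alias='orders_' ~ run_started_at.strftime('%Y%m%d')) }}
+
+select 1 as id
+```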
Starting in v1.0, dbt _will_ detect changes in environment variables. It will selectively re-parse only the files that depend on that [`env_var`](/reference/dbt-jinja-functions/env_var) value. (If the env var is used in `profiles.yml` or `dbt_project.yml`, a full re-parse is needed.) However, dbt will _not_ re-render **descriptions** that include env vars. If your descriptions include frequently changing env vars (this is highly uncommon), we recommend that you fully re-parse when generating documentation: `dbt --no-partial-parse docs generate`.
@@ -51,7 +51,9 @@ If certain inputs change between runs, dbt will trigger a full re-parse. The res
- `dbt_project.yml` content (or `env_var` values used within)
- installed packages
- dbt version
-- certain widely-used macros, e.g. [builtins](/reference/dbt-jinja-functions/builtins) overrides or `generate_x_name` for `database`/`schema`/`alias`
+- certain widely-used macros (for example, [builtins](/reference/dbt-jinja-functions/builtins) overrides or `generate_x_name` for `database`/`schema`/`alias`)
+
+If you're triggering [CI](/docs/deploy/continuous-integration) job runs, the benefits of partial parsing are not applicable to new pull requests (PRs) or new branches. However, they are applied on subsequent commits to the new PR or branch.
If you ever get into a bad state, you can disable partial parsing and trigger a full re-parse by setting the `PARTIAL_PARSE` global config to false, or by deleting `target/partial_parse.msgpack` (e.g. by running `dbt clean`).
diff --git a/website/docs/reference/project-configs/clean-targets.md b/website/docs/reference/project-configs/clean-targets.md
index 9b464840723..8ca4065ed75 100644
--- a/website/docs/reference/project-configs/clean-targets.md
+++ b/website/docs/reference/project-configs/clean-targets.md
@@ -19,10 +19,10 @@ Optionally specify a custom list of directories to be removed by the `dbt clean`
If this configuration is not included in your `dbt_project.yml` file, the `clean` command will remove files in your [target-path](/reference/project-configs/target-path).
## Examples
-### Remove packages and compiled files as part of `dbt clean`
-:::info
-This is our preferred configuration, but is not the default.
-:::
+
+### Remove packages and compiled files as part of `dbt clean` (preferred) {#remove-packages-and-compiled-files-as-part-of-dbt-clean}
+
+
To remove packages as well as compiled files, include the value of your [packages-install-path](/reference/project-configs/packages-install-path) configuration in your `clean-targets` configuration.
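+
+A sketch, assuming the default `target-path` of `target` and the default `packages-install-path` of `dbt_packages`:
+
+```yml
+clean-targets:
+  - target
+  - dbt_packages
+```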
diff --git a/website/docs/reference/project-configs/docs-paths.md b/website/docs/reference/project-configs/docs-paths.md
index 2aee7b31ee7..910cfbb0cce 100644
--- a/website/docs/reference/project-configs/docs-paths.md
+++ b/website/docs/reference/project-configs/docs-paths.md
@@ -20,12 +20,9 @@ Optionally specify a custom list of directories where [docs blocks](/docs/collab
By default, dbt will search in all resource paths for docs blocks (i.e. the combined list of [model-paths](/reference/project-configs/model-paths), [seed-paths](/reference/project-configs/seed-paths), [analysis-paths](/reference/project-configs/analysis-paths), [macro-paths](/reference/project-configs/macro-paths) and [snapshot-paths](/reference/project-configs/snapshot-paths)). If this option is configured, dbt will _only_ look in the specified directory for docs blocks.
-## Examples
-:::info
-We typically omit this configuration as we prefer dbt's default behavior.
-:::
+## Example
-### Use a subdirectory named `docs` for docs blocks
+Use a subdirectory named `docs` for docs blocks:
@@ -34,3 +31,5 @@ docs-paths: ["docs"]
```
+
+**Note:** We typically omit this configuration as we prefer dbt's default behavior.
diff --git a/website/docs/reference/project-configs/require-dbt-version.md b/website/docs/reference/project-configs/require-dbt-version.md
index 85a502bff60..6b17bb46120 100644
--- a/website/docs/reference/project-configs/require-dbt-version.md
+++ b/website/docs/reference/project-configs/require-dbt-version.md
@@ -19,7 +19,7 @@ When you set this configuration, dbt sends a helpful error message for any user
If this configuration is not specified, no version check will occur.
-:::info YAML Quoting
+### YAML quoting
This configuration needs to be interpolated by the YAML parser as a string. As such, you should quote the value of the configuration, taking care to avoid whitespace. For example:
```yml
@@ -32,8 +32,6 @@ require-dbt-version: >=1.0.0 # No quotes? No good
require-dbt-version: ">= 1.0.0" # Don't put whitespace after the equality signs
```
-:::
-
## Examples
@@ -73,18 +71,18 @@ require-dbt-version: ">=1.0.0,<2.0.0"
### Require a specific dbt version
-:::caution Not recommended
-With the release of major version 1.0 of dbt Core, pinning to a specific patch is discouraged.
-:::
+
+:::info Not recommended
+Pinning to a specific dbt version is discouraged because it limits project flexibility and can cause compatibility issues, especially with dbt packages. It's recommended to [pin to a major release](#pin-to-a-range), using a version range (for example, `">=1.0.0,<2.0.0"`) for broader compatibility and to benefit from updates.
While you can restrict your project to run only with an exact version of dbt Core, we do not recommend this for dbt Core v1.0.0 and higher.
-In the following example, the project will only run with dbt v0.21.1.
+In the following example, the project will only run with dbt v1.5:
```yml
-require-dbt-version: 0.21.1
+require-dbt-version: 1.5
```
diff --git a/website/docs/reference/resource-configs/bigquery-configs.md b/website/docs/reference/resource-configs/bigquery-configs.md
index 8f323bc4236..94d06311c55 100644
--- a/website/docs/reference/resource-configs/bigquery-configs.md
+++ b/website/docs/reference/resource-configs/bigquery-configs.md
@@ -596,9 +596,9 @@ with events as (
-#### Copying ingestion-time partitions
+#### Copying partitions
-If you have configured your incremental model to use "ingestion"-based partitioning (`partition_by.time_ingestion_partitioning: True`), you can opt to use a legacy mechanism for inserting and overwriting partitions. While this mechanism doesn't offer the same visibility and ease of debugging as the SQL `merge` statement, it can yield significant savings in time and cost for large datasets. Behind the scenes, dbt will add or replace each partition via the [copy table API](https://cloud.google.com/bigquery/docs/managing-tables#copy-table) and partition decorators.
+If you are replacing entire partitions in your incremental runs, you can opt to do so with the [copy table API](https://cloud.google.com/bigquery/docs/managing-tables#copy-table) and partition decorators rather than a `merge` statement. While this mechanism doesn't offer the same visibility and ease of debugging as the SQL `merge` statement, it can yield significant savings in time and cost for large datasets because the copy table API does not incur any costs for inserting the data; it's equivalent to the `bq cp` command-line interface (CLI) command.
You can enable this by switching on `copy_partitions: True` in the `partition_by` configuration. This approach works only in combination with "dynamic" partition replacement.
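+
+A configuration sketch (the model body, column name, and granularity are illustrative):
+
+```sql
+{{ config(
+    materialized = 'incremental',
+    incremental_strategy = 'insert_overwrite',
+    partition_by = {
+      "field": "created_date",
+      "data_type": "timestamp",
+      "granularity": "day",
+      "copy_partitions": true
+    }
+) }}
+
+select * from {{ ref('events') }}
+```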
diff --git a/website/docs/reference/resource-configs/contract.md b/website/docs/reference/resource-configs/contract.md
index ccc10099a12..6c11b08dd62 100644
--- a/website/docs/reference/resource-configs/contract.md
+++ b/website/docs/reference/resource-configs/contract.md
@@ -6,16 +6,7 @@ default_value: {contract: false}
id: "contract"
---
-:::info New functionality
-This functionality is new in v1.5.
-:::
-
-## Related documentation
-- [What is a model contract?](/docs/collaborate/govern/model-contracts)
-- [Defining `columns`](/reference/resource-properties/columns)
-- [Defining `constraints`](/reference/resource-properties/constraints)
-
-# Definition
+Supported in dbt v1.5 and higher.
When the `contract` configuration is enforced, dbt will ensure that your model's returned dataset exactly matches the attributes you have defined in yaml:
- `name` and `data_type` for every column
@@ -120,3 +111,8 @@ Imagine:
- The result is a delta between the yaml-defined contract, and the actual table in the database - which means the contract is now incorrect!
Why `append_new_columns`, rather than `sync_all_columns`? Because removing existing columns is a breaking change for contracted models!
+
+## Related documentation
+- [What is a model contract?](/docs/collaborate/govern/model-contracts)
+- [Defining `columns`](/reference/resource-properties/columns)
+- [Defining `constraints`](/reference/resource-properties/constraints)
\ No newline at end of file
diff --git a/website/docs/reference/resource-configs/delimiter.md b/website/docs/reference/resource-configs/delimiter.md
index 58d6ba8344a..5cc5ddaf44b 100644
--- a/website/docs/reference/resource-configs/delimiter.md
+++ b/website/docs/reference/resource-configs/delimiter.md
@@ -4,19 +4,14 @@ datatype:
default_value: ","
---
+Supported in v1.7 and higher.
+
## Definition
You can use this optional seed configuration to customize how you separate values in a [seed](/docs/build/seeds) by providing a one-character string.
* The delimiter defaults to a comma when not specified.
* Explicitly set the `delimiter` configuration value if you want seed files to use a different delimiter, such as "|" or ";".
-
-:::info New in 1.7!
-
-Delimiter is new functionality available beginning with dbt Core v1.7.
-
-:::
-
## Usage
diff --git a/website/docs/reference/resource-configs/docs.md b/website/docs/reference/resource-configs/docs.md
index d5f7b6499d8..bb0f3714dd4 100644
--- a/website/docs/reference/resource-configs/docs.md
+++ b/website/docs/reference/resource-configs/docs.md
@@ -30,6 +30,7 @@ models:
[](/reference/resource-configs/resource-path):
+docs:
show: true | false
+ node_color: color_id # Use name (such as node_color: purple) or hex code with quotes (such as node_color: "#cd7f32")
```
@@ -44,7 +45,7 @@ models:
- name: model_name
docs:
show: true | false
- node_color: "black"
+ node_color: color_id # Use name (such as node_color: purple) or hex code with quotes (such as node_color: "#cd7f32")
```
@@ -67,7 +68,7 @@ seeds:
[](/reference/resource-configs/resource-path):
+docs:
show: true | false
-
+ node_color: color_id # Use name (such as node_color: purple) or hex code with quotes (such as node_color: "#cd7f32")
```
@@ -81,6 +82,7 @@ seeds:
- name: seed_name
docs:
show: true | false
+ node_color: color_id # Use name (such as node_color: purple) or hex code with quotes (such as node_color: "#cd7f32")
```