diff --git a/website/docs/docs/build/environment-variables.md b/website/docs/docs/build/environment-variables.md index 95242069ed9..4c0b785448e 100644 --- a/website/docs/docs/build/environment-variables.md +++ b/website/docs/docs/build/environment-variables.md @@ -105,7 +105,6 @@ dbt Cloud has a number of pre-defined variables built in. Variables are set auto The following environment variable is set automatically for the dbt Cloud IDE: - `DBT_CLOUD_GIT_BRANCH` — Provides the development Git branch name in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud). - - Available in dbt v1.6 and later. - The variable changes when the branch is changed. - Doesn't require restarting the IDE after a branch change. - Currently not available in the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation). diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index 82ba2c3d74c..7a2c08d3e70 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -161,7 +161,7 @@ Where `name: 'dbt_utils'` specifies the subfolder of `dbt_packages` that's creat ### Native private packages -dbt Cloud supports private packages from [supported](#prerequisites) Git repos leveraging an exisiting [configuration](/docs/cloud/git/git-configuration-in-dbt-cloud) in your environment. Previously, you had to configure a [token](#git-token-method) to retrieve packages from your private repos. +dbt Cloud supports private packages from [supported](#prerequisites) Git repos leveraging an existing [configuration](/docs/cloud/git/git-configuration-in-dbt-cloud) in your environment. Previously, you had to configure a [token](#git-token-method) to retrieve packages from your private repos. #### Prerequisites diff --git a/website/docs/docs/build/unit-tests.md b/website/docs/docs/build/unit-tests.md index fc4cf02b34f..a85ffa07ed2 100644 --- a/website/docs/docs/build/unit-tests.md +++ b/website/docs/docs/build/unit-tests.md @@ -8,11 +8,10 @@ keywords: - unit test, unit tests, unit testing, dag --- -:::note + + -Unit testing functionality is available in [dbt Cloud Release Tracks](/docs/dbt-versions/cloud-release-tracks) or dbt Core v1.8+ -::: Historically, dbt's test coverage was confined to [“data” tests](/docs/build/data-tests), assessing the quality of input data or resulting datasets' structure. However, these tests could only be executed _after_ building a model. diff --git a/website/docs/docs/cloud-integrations/about-snowflake-native-app.md b/website/docs/docs/cloud-integrations/about-snowflake-native-app.md index 86ee6a7d630..9eb1179897e 100644 --- a/website/docs/docs/cloud-integrations/about-snowflake-native-app.md +++ b/website/docs/docs/cloud-integrations/about-snowflake-native-app.md @@ -46,7 +46,7 @@ App users are able to access all information that's available to the API service ## Procurement The dbt Snowflake Native App is available on the [Snowflake Marketplace](https://app.snowflake.com/marketplace/listing/GZTYZSRT2R3). Purchasing it includes access to the Native App and a dbt Cloud account that's on the Enterprise plan. Existing dbt Cloud Enterprise customers can also access it. If interested, contact your Enterprise account manager. -If you're interested, please [contact us](matilto:sales_snowflake_marketplace@dbtlabs.com) for more information. +If you're interested, please [contact us](mailto:sales_snowflake_marketplace@dbtlabs.com) for more information. 
## Support If you have any questions about the dbt Snowflake Native App, you may [contact our Support team](mailto:dbt-snowflake-marketplace@dbtlabs.com) for help. Please provide information about your installation of the Native App, including your dbt Cloud account ID and Snowflake account identifier. diff --git a/website/docs/docs/cloud-integrations/configure-auto-exposures.md b/website/docs/docs/cloud-integrations/configure-auto-exposures.md index 2bb09573221..9692249240a 100644 --- a/website/docs/docs/cloud-integrations/configure-auto-exposures.md +++ b/website/docs/docs/cloud-integrations/configure-auto-exposures.md @@ -6,7 +6,7 @@ description: "Import and auto-generate exposures from dashboards and understand image: /img/docs/cloud-integrations/auto-exposures/explorer-lineage2.jpg --- -# Configure auto-exposures +# Configure auto-exposures As a data team, it’s critical that you have context into the downstream use cases and users of your data products. [Auto-exposures](/docs/collaborate/auto-exposures) integrates natively with Tableau and [auto-generates downstream lineage](/docs/collaborate/auto-exposures#view-auto-exposures-in-dbt-explorer) in dbt Explorer for a richer experience. diff --git a/website/docs/docs/cloud-integrations/overview.md b/website/docs/docs/cloud-integrations/overview.md index 8334632a7f8..f5208c8d754 100644 --- a/website/docs/docs/cloud-integrations/overview.md +++ b/website/docs/docs/cloud-integrations/overview.md @@ -13,7 +13,7 @@ Many data applications integrate with dbt Cloud, enabling you to leverage the po
diff --git a/website/docs/docs/cloud/git/setup-azure.md b/website/docs/docs/cloud/git/setup-azure.md index 273660ba3dd..c6213b49453 100644 --- a/website/docs/docs/cloud/git/setup-azure.md +++ b/website/docs/docs/cloud/git/setup-azure.md @@ -155,7 +155,7 @@ The service user's permissions will also power which repositories a team can sel While it's common to enforce multi-factor authentication (MFA) for normal user accounts, service user authentication must not need an extra factor. If you enable a second factor for the service user, this can interrupt production runs and cause a failure to clone the repository. In order for the OAuth access token to work, the best practice is to remove any more burden of proof of identity for service users. -As a result, MFA must be explicity disabled in the Office 365 or Microsoft Entra ID administration panel for the service user. Just having it "un-connected" will not be sufficient, as dbt Cloud will be prompted to set up MFA instead of allowing the credentials to be used as intended. +As a result, MFA must be explicitly disabled in the Office 365 or Microsoft Entra ID administration panel for the service user. Just having it "un-connected" will not be sufficient, as dbt Cloud will be prompted to set up MFA instead of allowing the credentials to be used as intended. **To disable MFA for a single user using the Office 365 Administration console:** diff --git a/website/docs/docs/cloud/manage-access/external-oauth.md b/website/docs/docs/cloud/manage-access/external-oauth.md index 380d0a3d1cc..c25b44d1513 100644 --- a/website/docs/docs/cloud/manage-access/external-oauth.md +++ b/website/docs/docs/cloud/manage-access/external-oauth.md @@ -144,7 +144,7 @@ Adjust the other settings as needed to meet your organization's configurations i 1. Navigate back to the dbt Cloud **Account settings** —> **Integrations** page you were on at the beginning. It’s time to start filling out all of the fields. 1. `Integration name`: Give the integration a descriptive name that includes identifying information about the Okta environment so future users won’t have to guess where it belongs. 2. `Client ID` and `Client secrets`: Retrieve these from the Okta application page. - + 3. Authorize URL and Token URL: Found in the metadata URI. diff --git a/website/docs/docs/cloud/manage-access/invite-users.md b/website/docs/docs/cloud/manage-access/invite-users.md index 0922b4dc991..b9a12bae7c6 100644 --- a/website/docs/docs/cloud/manage-access/invite-users.md +++ b/website/docs/docs/cloud/manage-access/invite-users.md @@ -66,7 +66,7 @@ Once the user completes this process, their email and user information will popu * Is there a limit to the number of users I can invite? _Your ability to invite users is limited to the number of licenses you have available._ * Why are users are clicking the invitation link and getting an `Invalid Invitation Code` error? _We have seen scenarios where embedded secure link technology (such as enterprise Outlooks [Safe Link](https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/safe-links-about?view=o365-worldwide) feature) can result in errors when clicking on the email link. Be sure to include the `getdbt.com` URL in the allowlists for these services._ -* Can I have a mixure of users with SSO and username/password authentication? _Once SSO is enabled, you will no longer be able to add local users. 
If you have contractors or similar contingent workers, we recommend you add them to your SSO service._ +* Can I have a mixture of users with SSO and username/password authentication? _Once SSO is enabled, you will no longer be able to add local users. If you have contractors or similar contingent workers, we recommend you add them to your SSO service._ * What happens if I need to resend the invitation? _From the Users page, click on the invite record, and you will be presented with the option to resend the invitation._ * What can I do if I entered an email address incorrectly? _From the Users page, click on the invite record, and you will be presented with the option to revoke it. Once revoked, generate a new invitation to the correct email address._ diff --git a/website/docs/docs/cloud/manage-access/mfa.md b/website/docs/docs/cloud/manage-access/mfa.md index bcddc04f072..644fcdb32c2 100644 --- a/website/docs/docs/cloud/manage-access/mfa.md +++ b/website/docs/docs/cloud/manage-access/mfa.md @@ -58,7 +58,7 @@ Choose the next steps based on your preferred enrollment selection: 2. Follow the instructions in the modal window and click **Use security key**. - + 3. Scan the QR code or insert and touch activate your USB key to begin the process. Follow the on-screen prompts. diff --git a/website/docs/docs/collaborate/auto-exposures.md b/website/docs/docs/collaborate/auto-exposures.md index 495906cee75..a333df19831 100644 --- a/website/docs/docs/collaborate/auto-exposures.md +++ b/website/docs/docs/collaborate/auto-exposures.md @@ -7,7 +7,7 @@ pagination_next: "docs/collaborate/data-tile" image: /img/docs/cloud-integrations/auto-exposures/explorer-lineage.jpg --- -# Auto-exposures +# Auto-exposures As a data team, it’s critical that you have context into the downstream use cases and users of your data products. Auto-exposures integrate natively with Tableau (Power BI coming soon) and auto-generate downstream lineage in dbt Explorer for a richer experience. diff --git a/website/docs/docs/collaborate/data-tile.md b/website/docs/docs/collaborate/data-tile.md index 0edd9d7c44e..077a4f5a740 100644 --- a/website/docs/docs/collaborate/data-tile.md +++ b/website/docs/docs/collaborate/data-tile.md @@ -63,7 +63,7 @@ Follow these steps to set up your data health tile: 6. Navigate back to dbt Explorer and select an exposure. 7. Below the **Data health** section, expand on the toggle for instructions on how to embed the exposure tile (if you're an account admin with develop permissions). 8. In the expanded toggle, you'll see a text field where you can paste your **Metadata Only token**. - + 9. Once you’ve pasted your token, you can select either **URL** or **iFrame** depending on which you need to add to your dashboard. diff --git a/website/docs/docs/collaborate/explore-multiple-projects.md b/website/docs/docs/collaborate/explore-multiple-projects.md index b15e133a49e..3a0cce8a9e6 100644 --- a/website/docs/docs/collaborate/explore-multiple-projects.md +++ b/website/docs/docs/collaborate/explore-multiple-projects.md @@ -27,7 +27,7 @@ When viewing a downstream (child) project that imports and refs public models fr - Clicking on a model opens a side panel containing general information about the model, such as the specific dbt Cloud project that produces that model, description, package, and more. - Double-clicking on a model from another project opens the resource-level lineage graph of the parent project, if you have the permissions to do so. 
- +  ## Explore the project-level lineage graph diff --git a/website/docs/docs/collaborate/govern/model-versions.md b/website/docs/docs/collaborate/govern/model-versions.md index 0bd16a03b3a..35bb7e047c8 100644 --- a/website/docs/docs/collaborate/govern/model-versions.md +++ b/website/docs/docs/collaborate/govern/model-versions.md @@ -14,7 +14,7 @@ This functionality is new in v1.5 — if you have thoughts, participate in [the -import VersionsCallout from '/snippets/_version-callout.md'; +import VersionsCallout from '/snippets/_model-version-callout.md'; diff --git a/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md b/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md index 0a0347df9ea..0c22209d75c 100644 --- a/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md +++ b/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md @@ -55,7 +55,7 @@ Microsoft made several changes related to connection encryption. Read more about ### Authentication methods This adapter is based on the adapter for Microsoft SQL Server. -Therefor, the same authentication methods are supported. +Therefore, the same authentication methods are supported. The configuration is the same except for 1 major difference: instead of specifying `type: sqlserver`, you specify `type: synapse`. diff --git a/website/docs/docs/core/connect-data-platform/bigquery-setup.md b/website/docs/docs/core/connect-data-platform/bigquery-setup.md index 8b1867ef620..bfa99f21a6d 100644 --- a/website/docs/docs/core/connect-data-platform/bigquery-setup.md +++ b/website/docs/docs/core/connect-data-platform/bigquery-setup.md @@ -388,6 +388,28 @@ my-profile: execution_project: buck-stops-here-456 ``` +### Quota project + +By default, dbt will use the `quota_project_id` set within the credentials of the account you are using to authenticate to BigQuery. + +Optionally, you may specify `quota_project` to bill for query execution instead of the default quota project specified for the account from the environment. + +This can sometimes be required when impersonating service accounts that do not have the BigQuery API enabled within the project in which they are defined. Without overriding the quota project, dbt will fail to connect. + +If you choose to set a quota project, the account you use to authenticate must have the `Service Usage Consumer` role on that project. + +```yaml +my-profile: + target: dev + outputs: + dev: + type: bigquery + method: oauth + project: abc-123 + dataset: my_dataset + quota_project: my-bq-quota-project +``` + ### Running Python models on Dataproc import BigQueryDataproc from '/snippets/_bigquery-dataproc.md'; diff --git a/website/docs/docs/core/connect-data-platform/dremio-setup.md b/website/docs/docs/core/connect-data-platform/dremio-setup.md index 69f2b14fc4f..7ac304bba2b 100644 --- a/website/docs/docs/core/connect-data-platform/dremio-setup.md +++ b/website/docs/docs/core/connect-data-platform/dremio-setup.md @@ -3,14 +3,14 @@ title: "Dremio setup" description: "Read this guide to learn about the Dremio warehouse setup in dbt."
meta: maintained_by: Dremio - authors: 'Dremio (formerly Fabrice Etanchaud)' + authors: 'Dremio' github_repo: 'dremio/dbt-dremio' pypi_package: 'dbt-dremio' - min_core_version: 'v1.2.0' + min_core_version: 'v1.8.0' cloud_support: Not Supported min_supported_version: 'Dremio 22.0' - slack_channel_name: 'n/a' - slack_channel_link: 'https://www.getdbt.com/community' + slack_channel_name: 'db-dremio' + slack_channel_link: 'https://getdbt.slack.com/archives/C049G61TKBK' platform_name: 'Dremio' config_page: '/reference/resource-configs/no-configs' --- @@ -36,10 +36,5 @@ Before connecting from project to Dremio Cloud, follow these prerequisite steps: * Ensure that you are using version 22.0 or later. * Ensure that Python 3.9.x or later is installed on the system that you are running dbt on. -* Enable these support keys in your Dremio cluster: - * `dremio.iceberg.enabled` - * `dremio.iceberg.ctas.enabled` - * `dremio.execution.support_unlimited_splits` -See Support Keys in the Dremio documentation for the steps. * If you want to use TLS to secure the connection between dbt and Dremio Software, configure full wire encryption in your Dremio cluster. For instructions, see Configuring Wire Encryption. @@ -84,7 +80,7 @@ For descriptions of the configurations in these profiles, see [Configurations](# [project name]: outputs: dev: - cloud_host: https://api.dremio.cloud + cloud_host: api.dremio.cloud cloud_project_id: [project ID] object_storage_source: [name] object_storage_path: [path] @@ -161,7 +157,7 @@ For descriptions of the configurations in these profiles, see [Configurations](# | Configuration | Required? | Default Value | Description | | --- | --- | --- | --- | -| `cloud_host` | Yes | `https://api.dremio.cloud` | US Control Plane: `https://api.dremio.cloud`

EU Control Plane: `https://api.eu.dremio.cloud` | +| `cloud_host` | Yes | `api.dremio.cloud` | US Control Plane: `api.dremio.cloud`

EU Control Plane: `api.eu.dremio.cloud` | | `user` | Yes | None | Email address used as a username in Dremio Cloud | | `pat` | Yes | None | The personal access token to use for authentication. See [Personal Access Tokens](https://docs.dremio.com/cloud/security/authentication/personal-access-token/) for instructions about obtaining a token. | | `cloud_project_id` | Yes | None | The ID of the Sonar project in which to run transformations. | diff --git a/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md b/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md index 692342466b0..c9c91d3ef5b 100644 --- a/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md +++ b/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md @@ -65,7 +65,7 @@ your_profile_name: | type | The specific adapter to use | Required | `ibmdb2` | | schema | Specify the schema (database) to build models into | Required | `analytics` | | database | Specify the database you want to connect to | Required | `testdb` | -| host | Hostname or IP-adress | Required | `localhost` | +| host | Hostname or IP-address | Required | `localhost` | | port | The port to use | Optional | `50000` | | protocol | Protocol to use | Optional | `TCPIP` | | username | The username to use to connect to the server | Required | `my-username` | diff --git a/website/docs/docs/core/connect-data-platform/layer-setup.md b/website/docs/docs/core/connect-data-platform/layer-setup.md index 051094297a2..9514d6bb9e6 100644 --- a/website/docs/docs/core/connect-data-platform/layer-setup.md +++ b/website/docs/docs/core/connect-data-platform/layer-setup.md @@ -83,7 +83,7 @@ _Parameters:_ | Syntax | Description | | --------- |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `MODEL_TYPE` | Type of the model your want to train. There are two options:
- `classifier`: A model to predict classes/labels or categories such as spam detection
- `regressor`: A model to predict continious outcomes such as CLV prediction. | +| `MODEL_TYPE` | Type of the model you want to train. There are two options:
- `classifier`: A model to predict classes/labels or categories such as spam detection
- `regressor`: A model to predict continuous outcomes such as CLV prediction. | | `FEATURES` | Input column names as a list to train your AutoML model. | | `TARGET` | Target column that you want to predict. | diff --git a/website/docs/docs/core/connect-data-platform/postgres-setup.md b/website/docs/docs/core/connect-data-platform/postgres-setup.md index b6f34a00e0b..ef6b42d6236 100644 --- a/website/docs/docs/core/connect-data-platform/postgres-setup.md +++ b/website/docs/docs/core/connect-data-platform/postgres-setup.md @@ -68,7 +68,7 @@ The `role` config controls the Postgres role that dbt assumes when opening new c #### sslmode -The `sslmode` config controls how dbt connectes to Postgres databases using SSL. See [the Postgres docs](https://www.postgresql.org/docs/9.1/libpq-ssl.html) on `sslmode` for usage information. When unset, dbt will connect to databases using the Postgres default, `prefer`, as the `sslmode`. +The `sslmode` config controls how dbt connects to Postgres databases using SSL. See [the Postgres docs](https://www.postgresql.org/docs/9.1/libpq-ssl.html) on `sslmode` for usage information. When unset, dbt will connect to databases using the Postgres default, `prefer`, as the `sslmode`. #### sslcert @@ -99,7 +99,7 @@ If `dbt-postgres` encounters an operational error or timeout when opening a new `psycopg2-binary` is installed by default when installing `dbt-postgres`. Installing `psycopg2-binary` uses a pre-built version of `psycopg2` which may not be optimized for your particular machine. This is ideal for development and testing workflows where performance is less of a concern and speed and ease of install is more important. -However, production environments will benefit from a version of `psycopg2` which is built from source for your particular operating system and archtecture. In this scenario, speed and ease of install is less important as the on-going usage is the focus. +However, production environments will benefit from a version of `psycopg2` which is built from source for your particular operating system and architecture. In this scenario, speed and ease of install are less important, as the ongoing usage is the focus. diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md index 611642e91b7..97bba29e66e 100644 --- a/website/docs/docs/core/connect-data-platform/spark-setup.md +++ b/website/docs/docs/core/connect-data-platform/spark-setup.md @@ -25,7 +25,7 @@ import SetUpPages from '/snippets/_setup-pages-intro.md'; -If connecting to Databricks via ODBC driver, it requires `pyodbc`. Depending on your system, you can install it seperately or via pip. See the [`pyodbc` wiki](https://github.com/mkleehammer/pyodbc/wiki/Install) for OS-specific installation details. +If connecting to Databricks via ODBC driver, it requires `pyodbc`. Depending on your system, you can install it separately or via pip. See the [`pyodbc` wiki](https://github.com/mkleehammer/pyodbc/wiki/Install) for OS-specific installation details. If connecting to a Spark cluster via the generic thrift or http methods, it requires `PyHive`.
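+For example, a sketch of installing either set of dependencies through the adapter's pip extras (the extras names assume the current `dbt-spark` packaging; verify against the adapter's README):
+
+```shell
+# ODBC connections (pulls in pyodbc)
+pip install "dbt-spark[ODBC]"
+
+# thrift or http connections (pulls in PyHive)
+pip install "dbt-spark[PyHive]"
+```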
diff --git a/website/docs/docs/core/connect-data-platform/upsolver-setup.md b/website/docs/docs/core/connect-data-platform/upsolver-setup.md index 8e4203e0b0c..164d46ee8af 100644 --- a/website/docs/docs/core/connect-data-platform/upsolver-setup.md +++ b/website/docs/docs/core/connect-data-platform/upsolver-setup.md @@ -10,7 +10,7 @@ meta: min_core_version: 'v1.5.0' cloud_support: Not Supported min_supported_version: 'n/a' - slack_channel_name: 'Upsolver Comunity' + slack_channel_name: 'Upsolver Community' slack_channel_link: 'https://join.slack.com/t/upsolvercommunity/shared_invite/zt-1zo1dbyys-hj28WfaZvMh4Z4Id3OkkhA' platform_name: 'Upsolver' config_page: '/reference/resource-configs/upsolver-configs' diff --git a/website/docs/docs/core/pip-install.md b/website/docs/docs/core/pip-install.md index 6d94d92a64b..fa16ca13536 100644 --- a/website/docs/docs/core/pip-install.md +++ b/website/docs/docs/core/pip-install.md @@ -29,9 +29,9 @@ dbt-env\Scripts\activate # activate the environment for Windows #### Create an alias -To activate your dbt environment with every new shell window or session, you can create an alias for the source command in your $HOME/.bashrc, $HOME/.zshrc, or whichever config file your shell draws from. +To activate your dbt environment with every new shell window or session, you can create an alias for the source command in your `$HOME/.bashrc`, `$HOME/.zshrc`, or whichever config file your shell draws from. -For example, add the following to your rc file, replacing <PATH_TO_VIRTUAL_ENV_CONFIG> with the path to your virtual environment configuration. +For example, add the following to your rc file, replacing `<PATH_TO_VIRTUAL_ENV_CONFIG>` with the path to your virtual environment configuration. ```shell alias env_dbt='source <PATH_TO_VIRTUAL_ENV_CONFIG>/bin/activate' ``` diff --git a/website/docs/docs/dbt-cloud-apis/authentication.md b/website/docs/docs/dbt-cloud-apis/authentication.md index 43a08d84fd7..e817512c1fc 100644 --- a/website/docs/docs/dbt-cloud-apis/authentication.md +++ b/website/docs/docs/dbt-cloud-apis/authentication.md @@ -31,7 +31,7 @@ pagination_prev: null You should use service tokens broadly for any production workflow where you need a service account. You should use PATs only for developmental workflows _or_ dbt Cloud client workflows that require user context. The following examples show you when to use a personal access token (PAT) or a service token: -* **Connecting a partner integration to dbt Cloud** — Some examples include the [dbt Semantic Layer Google Sheets integration](/docs/cloud-integrations/avail-sl-integrations), Hightouch, Datafold, a custom app you’ve created, etc. These types of integrations should use a service token instead of a PAT because service tokens give you visibility, and you can scope them to only what the integration needs and ensure the least privilege. We highly recommend switching to a service token if you’re using a personal acess token for these integrations today. +* **Connecting a partner integration to dbt Cloud** — Some examples include the [dbt Semantic Layer Google Sheets integration](/docs/cloud-integrations/avail-sl-integrations), Hightouch, Datafold, a custom app you’ve created, etc. These types of integrations should use a service token instead of a PAT because service tokens give you visibility, and you can scope them to only what the integration needs and ensure the least privilege. We highly recommend switching to a service token if you’re using a personal access token for these integrations today.
* **Production Terraform** — Use a service token since this is a production workflow and is acting as a service account and not a user account. * **Cloud CLI** — Use a PAT since the dbt Cloud CLI works within the context of a user (the user is making the requests and has to operate within the context of their user account). * **Testing a custom script and staging Terraform or Postman** — We recommend using a PAT as this is a developmental workflow and is scoped to the user making the changes. When you push this script or Terraform into production, use a service token instead. diff --git a/website/docs/docs/dbt-versions/2022-release-notes.md b/website/docs/docs/dbt-versions/2022-release-notes.md index b46c259a6d8..f180f664372 100644 --- a/website/docs/docs/dbt-versions/2022-release-notes.md +++ b/website/docs/docs/dbt-versions/2022-release-notes.md @@ -51,7 +51,7 @@ packages: -## Novemver 2022 +## November 2022 ### The dbt Cloud + Databricks experience is getting even better @@ -241,4 +241,4 @@ We started the new year with a gift! Multi-tenant Team and Enterprise accounts c #### Performance improvements and enhancements -* We added client-side naming validation for file or folder creation. \ No newline at end of file +* We added client-side naming validation for file or folder creation. diff --git a/website/docs/docs/dbt-versions/2023-release-notes.md b/website/docs/docs/dbt-versions/2023-release-notes.md index ec635a051dc..4dd10c36b5c 100644 --- a/website/docs/docs/dbt-versions/2023-release-notes.md +++ b/website/docs/docs/dbt-versions/2023-release-notes.md @@ -35,7 +35,7 @@ Archived release notes for dbt Cloud from 2023 To learn more, refer to [Extended attributes](/docs/dbt-cloud-environments#extended-attributes). - The **Extended Atrributes** text box is available from your environment's settings page: + The **Extended Attributes** text box is available from your environment's settings page: @@ -183,7 +183,7 @@ Archived release notes for dbt Cloud from 2023 Previously in dbt Cloud, you could only rerun an errored job from start but now you can also rerun it from its point of failure. - You can view which job failed to complete successully, which command failed in the run step, and choose how to rerun it. To learn more, refer to [Retry jobs](/docs/deploy/retry-jobs). + You can view which job failed to complete successfully, which command failed in the run step, and choose how to rerun it. To learn more, refer to [Retry jobs](/docs/deploy/retry-jobs). @@ -812,7 +812,7 @@ Archived release notes for dbt Cloud from 2023 -- +- The dbt Cloud Scheduler now prevents queue clog by canceling unnecessary runs of over-scheduled jobs. diff --git a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md index 9a4712af528..2a4a9d96528 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md +++ b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md @@ -92,7 +92,7 @@ You can read more about each of these behavior changes in the following links: - (Introduced, disabled by default) [`skip_nodes_if_on_run_start_fails` project config flag](/reference/global-configs/behavior-changes#behavior-change-flags). If the flag is set and **any** `on-run-start` hook fails, mark all selected nodes as skipped. - `on-run-start/end` hooks are **always** run, regardless of whether they passed or failed last time. 
- (Introduced, disabled by default) [[Redshift] `restrict_direct_pg_catalog_access`](/reference/global-configs/behavior-changes#redshift-restrict_direct_pg_catalog_access). If the flag is set the adapter will use the Redshift API (through the Python client) if available, or query Redshift's `information_schema` tables instead of using `pg_` tables. -- (Introduced, disabled by default) [`require_nested_cumulative_type_params`](/reference/global-configs/behavior-changes#cumulative-metrics). If the flag is set to `True`, users will receive an error instead of a warning if they're not proprly formatting cumulative metrics using the new [`cumulative_type_params`](/docs/build/cumulative#parameters) nesting. +- (Introduced, disabled by default) [`require_nested_cumulative_type_params`](/reference/global-configs/behavior-changes#cumulative-metrics). If the flag is set to `True`, users will receive an error instead of a warning if they're not properly formatting cumulative metrics using the new [`cumulative_type_params`](/docs/build/cumulative#parameters) nesting. - (Introduced, disabled by default) [`require_batched_execution_for_custom_microbatch_strategy`](/reference/global-configs/behavior-changes#custom-microbatch-strategy). Set to `True` if you use a custom microbatch macro to enable batched execution. If you don't have a custom microbatch macro, you don't need to set this flag as dbt will handle microbatching automatically for any model using the microbatch strategy. ## Adapter specific features and functionalities diff --git a/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md b/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md index 6139cdcfc6f..11c78bd4bfa 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md +++ b/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md @@ -110,7 +110,7 @@ The built-in [collect_freshness](https://github.com/dbt-labs/dbt-core/blob/1.5.l {{ return(load_result('collect_freshness')) }} ``` -Finally: The [built-in `generate_alias_name` macro](https://github.com/dbt-labs/dbt-core/blob/1.5.latest/core/dbt/include/global_project/macros/get_custom_name/get_custom_alias.sql) now includes logic to handle versioned models. If your project has reimplemented the `generate_alias_name` macro with custom logic, and you want to start using [model versions](/docs/collaborate/govern/model-versions), you will need to update the logic in your macro. Note that, while this is **not** a prerequisite for upgrading to v1.5—only for using the new feature—we recommmend that you do this during your upgrade, whether you're planning to use model versions tomorrow or far in the future. +Finally: The [built-in `generate_alias_name` macro](https://github.com/dbt-labs/dbt-core/blob/1.5.latest/core/dbt/include/global_project/macros/get_custom_name/get_custom_alias.sql) now includes logic to handle versioned models. If your project has reimplemented the `generate_alias_name` macro with custom logic, and you want to start using [model versions](/docs/collaborate/govern/model-versions), you will need to update the logic in your macro. Note that, while this is **not** a prerequisite for upgrading to v1.5—only for using the new feature—we recommend that you do this during your upgrade, whether you're planning to use model versions tomorrow or far in the future. 
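+As a reference point, here is a simplified sketch of the version-aware branch in the v1.5 built-in; the `node.version` check is the piece a custom override needs to add:
+
+```sql
+{% macro generate_alias_name(custom_alias_name=none, node=none) -%}
+    {%- if custom_alias_name -%}
+        {{ custom_alias_name | trim }}
+    {#- New in v1.5: versioned models get a `_v<version>` suffix by default -#}
+    {%- elif node.version -%}
+        {{ return(node.name ~ '_v' ~ (node.version | replace('.', '_'))) }}
+    {%- else -%}
+        {{ node.name }}
+    {%- endif -%}
+{%- endmacro %}
+```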
Likewise, if your project has reimplemented the `ref` macro with custom logic, you will need to update the logic in your macro as described [here](https://docs.getdbt.com/reference/dbt-jinja-functions/builtins). diff --git a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md index d6fc6f9f49a..d610cdb4455 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md +++ b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md @@ -80,7 +80,7 @@ The `snowflake__list_schemas` macro should now return an Agate dataframe with a column named `"name"`. If you are overriding the `snowflake__list_schemas` macro in your project, you can find more information about this change in [this pull request](https://github.com/dbt-labs/dbt-core/pull/2171). -### Snowflake databases wih 10,000 schemas +### Snowflake databases with 10,000 schemas dbt no longer supports running against Snowflake databases containing more than 10,000 schemas. This is due limitations of the `show schemas in database` query that dbt now uses to find schemas in a Snowflake database. If your dbt project diff --git a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md index 6a19bdcf808..00d6a70bd05 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md +++ b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md @@ -237,7 +237,7 @@ modules, please be mindful of the following changes to dbt's Python dependencies: Core: -- Pinned `Jinja2` depdendency to `2.11.2` +- Pinned `Jinja2` dependency to `2.11.2` - Pinned `hologram` to `0.0.7` - Require Python >= `3.6.3` diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md index b63ff55234d..9b2205e46d8 100644 --- a/website/docs/docs/dbt-versions/release-notes.md +++ b/website/docs/docs/dbt-versions/release-notes.md @@ -20,6 +20,7 @@ Release notes are grouped by month for both multi-tenant and virtual private clo ## December 2024 +- **New**: [Auto exposures](/docs/collaborate/auto-exposures) are now generally available to dbt Cloud Enterprise plans. Auto-exposures integrate natively with Tableau (Power BI coming soon) and auto-generate downstream lineage in dbt Explorer for a richer experience. - **New**: The dbt Semantic Layer supports Sigma as a [partner integration](/docs/cloud-integrations/avail-sl-integrations), available in Preview. Refer to [Sigma](https://help.sigmacomputing.com/docs/configure-a-dbt-semantic-layer-integration) for more information. - **New**: The dbt Semantic Layer now supports Azure Single-tenant deployments. Refer to [Set up the dbt Semantic Layer](/docs/use-dbt-semantic-layer/setup-sl) for more information on how to get started. - **Fix**: Resolved intermittent issues in Single-tenant environments affecting Semantic Layer and query history. 
diff --git a/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md b/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md index 996229807a1..f4ea44c6b95 100644 --- a/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md +++ b/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md @@ -326,7 +326,7 @@ Rolling out a few long-term bets to ensure that our beloved dbt Cloud does not f - Fix NoSuchKey error - Guarantee unique notification settings per account, user, and type - Fix for account notification settings -- Dont show deleted projects on notifications page +- Don't show deleted projects on notifications page - Fix unicode error while decoding last_chunk - Show more relevant errors to customers - Groups are now editable by non-sudo requests diff --git a/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md b/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md index a6b68cf9d51..32a33d95301 100644 --- a/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md +++ b/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md @@ -464,7 +464,7 @@ This release adds a new version of dbt (0.16.1), fixes a number of IDE bugs, and - Fixed issue preventing temporary PR schemas from being dropped when PR is closed. - Fix issues with IDE tabs not updating query compile and run results. - Fix issues with query runtime timer in IDE for compile and run query functions. -- Fixed what settings are displayed on the account settings page to allign with the user's permissions. +- Fixed what settings are displayed on the account settings page to align with the user's permissions. - Fixed bug with checking user's permissions in frontend when user belonged to more than one project. - Fixed bug with access control around environments and file system/git interactions that occurred when using IDE. - Fixed a bug with Environments too generously matching repository. diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md index 38bfb56a728..08c7813bfd3 100644 --- a/website/docs/docs/deploy/ci-jobs.md +++ b/website/docs/docs/deploy/ci-jobs.md @@ -190,6 +190,20 @@ To validate _all_ semantic nodes in your project, add the following command to d + + +dbt Cloud won't trigger a CI job run if the latest commit in a pull or merge request has already triggered a run for that job. However, some providers (like GitHub) will enforce the result of the existing run on multiple pull/merge requests. + +Scenarios where dbt Cloud does not trigger a CI job with Azure DevOps: + +1. Reusing a branch in a new PR + - If you abandon a previous PR (PR 1) that triggered a CI job for the same branch (`feature-123`) merging into `main`, and then open a new PR (PR 2) with the same branch merging into `main` — dbt Cloud won't trigger a new CI job for PR 2. + +2. Reusing the same commit + - If you create a new PR (PR 2) on the same commit (`#4818ceb`) as a previous PR (PR 1) that triggered a CI job — dbt Cloud won't trigger a new CI job for PR 2. + + + If your temporary schemas aren't dropping after a PR merges or closes, this typically indicates one of these issues: - You have overridden the generate_schema_name macro and it isn't using dbt_cloud_pr_ as the prefix.
diff --git a/website/docs/docs/deploy/merge-jobs.md b/website/docs/docs/deploy/merge-jobs.md index a187e3992f8..e148498ed01 100644 --- a/website/docs/docs/deploy/merge-jobs.md +++ b/website/docs/docs/deploy/merge-jobs.md @@ -20,7 +20,7 @@ By using CD in dbt Cloud, you can take advantage of deferral to build only the e 1. On your deployment environment page, click **Create job** > **Merge job**. 1. Options in the **Job settings** section: - **Job name** — Specify the name for the merge job. - - **Description** — Provide a descripion about the job. + - **Description** — Provide a description of the job. - **Environment** — By default, it’s set to the environment you created the job from. 1. In the **Git trigger** section, the **Run on merge** option is enabled by default. Every time a PR merges (to a base branch configured in the environment) in your Git repo, this job will get triggered to run. diff --git a/website/docs/docs/deploy/webhooks.md b/website/docs/docs/deploy/webhooks.md index 52ce2a1fe56..4ff9c350344 100644 --- a/website/docs/docs/deploy/webhooks.md +++ b/website/docs/docs/deploy/webhooks.md @@ -217,7 +217,7 @@ GET https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscription { "id": "wsu_12345abcde", "account_identifier": "act_12345abcde", - "name": "Notication Webhook", + "name": "Notification Webhook", "description": "Webhook used to trigger notifications in Slack", "job_ids": [], "event_types": [ diff --git a/website/docs/reference/global-configs/version-compatibility.md b/website/docs/reference/global-configs/version-compatibility.md index 7667dcfda9c..b362add9842 100644 --- a/website/docs/reference/global-configs/version-compatibility.md +++ b/website/docs/reference/global-configs/version-compatibility.md @@ -14,7 +14,7 @@ Running with dbt=1.0.0 Found 13 models, 2 tests, 1 archives, 0 analyses, 204 macros, 2 operations.... ``` -:::info dbt Cloud release tracks +:::note dbt Cloud release tracks ::: diff --git a/website/docs/reference/project-configs/version.md b/website/docs/reference/project-configs/version.md index 890ad8542a7..54df6bfcb31 100644 --- a/website/docs/reference/project-configs/version.md +++ b/website/docs/reference/project-configs/version.md @@ -4,7 +4,7 @@ required: True keyword: project version, project versioning, dbt project versioning --- -import VersionsCallout from '/snippets/_version-callout.md'; +import VersionsCallout from '/snippets/_model-version-callout.md'; diff --git a/website/docs/reference/resource-configs/alias.md b/website/docs/reference/resource-configs/alias.md index 5beaa238806..16a8a392e06 100644 --- a/website/docs/reference/resource-configs/alias.md +++ b/website/docs/reference/resource-configs/alias.md @@ -101,7 +101,7 @@ seeds: -Configure a snapshots's alias in your `dbt_project.yml` file or config block. +Configure a snapshot's alias in your `dbt_project.yml` file, `snapshots/snapshot_name.yml` file, or config block. The following examples demonstrate how to `alias` a snapshot named `your_snapshot` to `the_best_snapshot`.
@@ -117,18 +117,18 @@ snapshots: ``` -In the `snapshots/properties.yml` file: +In the `snapshots/snapshot_name.yml` file: - + ```yml version: 2 snapshots: - - name: your_snapshot + - name: your_snapshot_name config: alias: the_best_snapshot ``` In `snapshots/your_snapshot.sql` file: @@ -185,11 +185,12 @@ In `tests/unique_order_id_test.sql` file: ```sql {{ config( alias="unique_order_id_test", - severity="error", + severity="error" +) }} ``` -When using [`store_failures_as`](/reference/resource-configs/store_failures_as), this would return the name `analytics.finance.orders_order_id_unique_order_id_test` in the database. +When using [`store_failures_as`](/reference/resource-configs/store_failures_as), this would return the name `analytics.dbt_test__audit.orders_order_id_unique_order_id_test` in the database. diff --git a/website/docs/reference/resource-configs/batch_size.md b/website/docs/reference/resource-configs/batch_size.md index 4001545778a..0110da53bb2 100644 --- a/website/docs/reference/resource-configs/batch_size.md +++ b/website/docs/reference/resource-configs/batch_size.md @@ -7,7 +7,7 @@ description: "dbt uses `batch_size` to determine how large batches are when runn datatype: hour | day | month | year --- -Available in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) and dbt Core v1.9 and higher. + ## Definition diff --git a/website/docs/reference/resource-configs/begin.md b/website/docs/reference/resource-configs/begin.md index dd47419be21..924cdfbcfad 100644 --- a/website/docs/reference/resource-configs/begin.md +++ b/website/docs/reference/resource-configs/begin.md @@ -7,7 +7,7 @@ description: "dbt uses `begin` to determine when a microbatch incremental model datatype: string --- -Available in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) and dbt Core v1.9 and higher. + ## Definition diff --git a/website/docs/reference/resource-configs/contract.md b/website/docs/reference/resource-configs/contract.md index 18266ec672f..bd1fceb4e9b 100644 --- a/website/docs/reference/resource-configs/contract.md +++ b/website/docs/reference/resource-configs/contract.md @@ -6,8 +6,6 @@ default_value: {enforced: false} id: "contract" --- -Supported in dbt v1.5 and higher. - When the `contract` configuration is enforced, dbt will ensure that your model's returned dataset exactly matches the attributes you have defined in yaml: - `name` and `data_type` for every column - Additional [`constraints`](/reference/resource-properties/constraints), as supported for this materialization and data platform diff --git a/website/docs/reference/resource-configs/database.md b/website/docs/reference/resource-configs/database.md index 6c57e7e2c69..16742b3f597 100644 --- a/website/docs/reference/resource-configs/database.md +++ b/website/docs/reference/resource-configs/database.md @@ -22,6 +22,7 @@ models: ``` + This would result in the generated relation being located in the `reporting` database, so the full relation name would be `reporting.finance.sales_metrics` instead of the default target database. @@ -55,7 +56,7 @@ Available for dbt Cloud release tracks or dbt Core v1.9+. Select v1.9 or newer f -Specify a custom database for a snapshot in your `dbt_project.yml` or config file. +Specify a custom database for a snapshot in your `dbt_project.yml`, `snapshot_name.yml` file, or config file.
For example, if you have a snapshot that you want to load into a database other than the target database, you can configure it like this: @@ -69,6 +70,20 @@ snapshots: ``` +Or in a `snapshot_name.yml` file: + + + +```yaml +version: 2 + +snapshots: + - name: snapshot_name + [config](/reference/resource-properties/config): + database: snapshots +``` + + This results in the generated relation being located in the `snapshots` database so the full relation name would be `snapshots.finance.your_snapshot` instead of the default target database. diff --git a/website/docs/reference/resource-configs/dbt_valid_to_current.md b/website/docs/reference/resource-configs/dbt_valid_to_current.md index 2a6cf3abe6d..9cf2ca0860e 100644 --- a/website/docs/reference/resource-configs/dbt_valid_to_current.md +++ b/website/docs/reference/resource-configs/dbt_valid_to_current.md @@ -6,7 +6,7 @@ default_value: {NULL} id: "dbt_valid_to_current" --- -Available from dbt v1.9 or with [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) dbt Cloud. + diff --git a/website/docs/reference/resource-configs/delimiter.md b/website/docs/reference/resource-configs/delimiter.md index 5cc5ddaf44b..e1a201ef198 100644 --- a/website/docs/reference/resource-configs/delimiter.md +++ b/website/docs/reference/resource-configs/delimiter.md @@ -4,7 +4,7 @@ datatype: default_value: "," --- -Supported in v1.7 and higher. + ## Definition diff --git a/website/docs/reference/resource-configs/enabled.md b/website/docs/reference/resource-configs/enabled.md index b74d7250907..faee6654b22 100644 --- a/website/docs/reference/resource-configs/enabled.md +++ b/website/docs/reference/resource-configs/enabled.md @@ -78,9 +78,28 @@ snapshots: + + + + +```yaml +version: 2 + +snapshots: + - name: snapshot_name + [config](/reference/resource-properties/config): + enabled: true | false +``` + + + + + ```sql +-- Configuring in a SQL file is a legacy method and not recommended. Use the YAML file instead. + {% snapshot [snapshot_name](snapshot_name) %} {{ config( @@ -90,11 +109,10 @@ snapshots: enabled=true | false ) }} select ... {% endsnapshot %} - ``` - + diff --git a/website/docs/reference/resource-configs/event-time.md b/website/docs/reference/resource-configs/event-time.md index c18c8de6397..e746b7658ba 100644 --- a/website/docs/reference/resource-configs/event-time.md +++ b/website/docs/reference/resource-configs/event-time.md @@ -7,7 +7,7 @@ description: "dbt uses event_time to understand when an event occurred. When def datatype: string --- -Available in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) and dbt Core v1.9 and higher.
+ diff --git a/website/docs/reference/resource-configs/group.md b/website/docs/reference/resource-configs/group.md index cd0ad2683f5..5ea701b3b63 100644 --- a/website/docs/reference/resource-configs/group.md +++ b/website/docs/reference/resource-configs/group.md @@ -96,6 +96,21 @@ snapshots: + + + +```yaml +version: 2 + +snapshots: + - name: snapshot_name + [config](/reference/resource-properties/config): + group: GROUP_NAME +``` + + + + ```sql diff --git a/website/docs/reference/resource-configs/hard-deletes.md b/website/docs/reference/resource-configs/hard-deletes.md index 50c8046f4e1..859e4e9e31a 100644 --- a/website/docs/reference/resource-configs/hard-deletes.md +++ b/website/docs/reference/resource-configs/hard-deletes.md @@ -8,8 +8,7 @@ id: "hard-deletes" sidebar_label: "hard_deletes" --- -Available from dbt v1.9 or with [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks). - + diff --git a/website/docs/reference/resource-configs/lookback.md b/website/docs/reference/resource-configs/lookback.md index 037ffdeb68f..fc832fab3d9 100644 --- a/website/docs/reference/resource-configs/lookback.md +++ b/website/docs/reference/resource-configs/lookback.md @@ -7,8 +7,7 @@ description: "dbt uses `lookback` to detrmine how many 'batches' of `batch_size` datatype: int --- -Available in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) and dbt Core v1.9 and higher. - + ## Definition Set the `lookback` to an integer greater than or equal to zero. The default value is `1`. You can configure `lookback` for a [model](/docs/build/models) in your `dbt_project.yml` file, property YAML file, or config block. diff --git a/website/docs/reference/resource-configs/persist_docs.md b/website/docs/reference/resource-configs/persist_docs.md index d4a90027771..68a23274b4b 100644 --- a/website/docs/reference/resource-configs/persist_docs.md +++ b/website/docs/reference/resource-configs/persist_docs.md @@ -84,6 +84,23 @@ snapshots: + + + +```yaml +version: 2 + +snapshots: + - name: snapshot_name + [config](/reference/resource-properties/config): + persist_docs: + relation: true + columns: true +``` + + + + ```sql diff --git a/website/docs/reference/resource-configs/schema.md b/website/docs/reference/resource-configs/schema.md index 6f56215de61..1b5a2d83c45 100644 --- a/website/docs/reference/resource-configs/schema.md +++ b/website/docs/reference/resource-configs/schema.md @@ -22,13 +22,14 @@ models: ``` + This would result in the generated relations for these models being located in the `marketing` schema, so the full relation names would be `analytics.target_schema_marketing.model_name`. This is because the schema of the relation is `{{ target.schema }}_{{ schema }}`. The [definition](#definition) section explains this in more detail. -Configure a custom schema in your `dbt_project.yml` file. +Configure a [custom schema](/docs/build/custom-schemas#understanding-custom-schemas) in your `dbt_project.yml` file. For example, if you have a seed that should be placed in a separate schema called `mappings`, you can configure it like this: @@ -50,16 +51,18 @@ This would result in the generated relation being located in the `mappings` sche -Available in dbt Core v1.9+. Select v1.9 or newer from the version dropdown to view the configs. Try it now in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks). +Available in dbt Core v1.9 and higher. Select v1.9 or newer from the version dropdown to view the configs. 
Try it now in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks). -Specify a custom schema for a snapshot in your `dbt_project.yml` or config file. +Specify a [custom schema](/docs/build/custom-schemas#understanding-custom-schemas) for a snapshot in your `dbt_project.yml` or YAML file. For example, if you have a snapshot that you want to load into a schema other than the target schema, you can configure it like this: +In a `dbt_project.yml` file: + ```yml @@ -70,6 +73,21 @@ snapshots: ``` +In a `snapshots/snapshot_name.yml` file: + + + +```yaml +version: 2 + +snapshots: + - name: snapshot_name + [config](/reference/resource-properties/config): + schema: snapshots +``` + + This results in the generated relation being located in the `snapshots` schema so the full relation name would be `analytics.snapshots.your_snapshot` instead of the default target schema. @@ -78,20 +96,25 @@ This results in the generated relation being located in the `snapshots` schema s +Specify a [custom schema](/docs/build/custom-schemas#understanding-custom-schemas) for a [saved query](/docs/build/saved-queries#parameters) in your `dbt_project.yml` or YAML file. + ```yml saved-queries: +schema: metrics ``` + +This would result in the saved query being stored in the `metrics` schema. + + -Customize the schema for storing test results in your `dbt_project.yml` file. +Specify a [custom schema](/docs/build/custom-schemas#understanding-custom-schemas) for storing test results in your `dbt_project.yml` file. For example, to save test results in a specific schema, you can configure it like this: - ```yml diff --git a/website/docs/reference/resource-configs/snapshot_meta_column_names.md b/website/docs/reference/resource-configs/snapshot_meta_column_names.md index f1d29ba8bee..24e4c8ca577 100644 --- a/website/docs/reference/resource-configs/snapshot_meta_column_names.md +++ b/website/docs/reference/resource-configs/snapshot_meta_column_names.md @@ -6,7 +6,7 @@ default_value: {"dbt_valid_from": "dbt_valid_from", "dbt_valid_to": "dbt_valid_t id: "snapshot_meta_column_names" --- -Available in dbt Core v1.9+. Select v1.9 or newer from the version dropdown to view the configs. Try it now in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks). + diff --git a/website/docs/reference/resource-configs/snowflake-configs.md b/website/docs/reference/resource-configs/snowflake-configs.md index d576b195b65..9d84e892236 100644 --- a/website/docs/reference/resource-configs/snowflake-configs.md +++ b/website/docs/reference/resource-configs/snowflake-configs.md @@ -38,11 +38,11 @@ flags: The following configurations are supported. For more information, check out the Snowflake reference for [`CREATE ICEBERG TABLE` (Snowflake as the catalog)](https://docs.snowflake.com/en/sql-reference/sql/create-iceberg-table-snowflake). -| Field | Type | Required | Description | Sample input | Note | +| Parameter | Type | Required | Description | Sample input | Note | | ------ | ----- | -------- | ------------- | ------------ | ------ | -| Table Format | String | Yes | Configures the objects table format. | `iceberg` | `iceberg` is the only accepted value. | -| External volume | String | Yes(*) | Specifies the identifier (name) of the external volume where Snowflake writes the Iceberg table's metadata and data files. | `my_s3_bucket` | *You don't need to specify this if the account, database, or schema already has an associated external volume.
[More info](https://docs.snowflake.com/en/sql-reference/sql/create-iceberg-table-snowflake#:~:text=Snowflake%20Table%20Structures.-,external_volume) | -| Base location Subpath | String | No | An optional suffix to add to the `base_location` path that dbt automatically specifies. | `jaffle_marketing_folder` | We recommend that you do not specify this. Modifying this parameter results in a new Iceberg table. See [Base Location](#base-location) for more info. | +| `table_format` | String | Yes | Configures the objects table format. | `iceberg` | `iceberg` is the only accepted value. | +| `external_volume` | String | Yes(*) | Specifies the identifier (name) of the external volume where Snowflake writes the Iceberg table's metadata and data files. | `my_s3_bucket` | *You don't need to specify this if the account, database, or schema already has an associated external volume. [More info](https://docs.snowflake.com/user-guide/tables-iceberg-configure-external-volume#set-a-default-external-volume-at-the-account-database-or-schema-level) | +| `base_location_subpath` | String | No | An optional suffix to add to the `base_location` path that dbt automatically specifies. | `jaffle_marketing_folder` | We recommend that you do not specify this. Modifying this parameter results in a new Iceberg table. See [Base Location](#base-location) for more info. | ### Example configuration diff --git a/website/docs/reference/resource-properties/concurrent_batches.md b/website/docs/reference/resource-properties/concurrent_batches.md index 4d6b2ea0af4..eef795344c3 100644 --- a/website/docs/reference/resource-properties/concurrent_batches.md +++ b/website/docs/reference/resource-properties/concurrent_batches.md @@ -5,11 +5,7 @@ datatype: model_name description: "Learn about concurrent_batches in dbt." --- -:::note - -Available in dbt Core v1.9+ or the [dbt Cloud "Latest" release tracks](/docs/dbt-versions/cloud-release-tracks). - -::: + diff --git a/website/docs/reference/resource-properties/unit-tests.md b/website/docs/reference/resource-properties/unit-tests.md index 7bc177a133c..46243f8c1ef 100644 --- a/website/docs/reference/resource-properties/unit-tests.md +++ b/website/docs/reference/resource-properties/unit-tests.md @@ -5,11 +5,8 @@ resource_types: [models] datatype: test --- -:::note + -This functionality is available in dbt Core v1.8+ and [dbt Cloud release tracks](/docs/dbt-versions/cloud-release-tracks). - -::: Unit tests validate your SQL modeling logic on a small set of static inputs before you materialize your full model in production. They support a test-driven development approach, improving both the efficiency of developers and reliability of code. 
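+As a quick sketch of the spec's shape (the model, column, and row values here are illustrative only):
+
+```yml
+unit_tests:
+  - name: test_is_valid_email_address  # descriptive test name
+    model: dim_customers               # the model being tested
+    given:                             # static inputs for each upstream ref
+      - input: ref('stg_customers')
+        rows:
+          - {email: cool@example.com}
+    expect:                            # the rows the model should return
+      rows:
+        - {email: cool@example.com, is_valid_email_address: true}
+```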
diff --git a/website/docs/reference/resource-properties/versions.md b/website/docs/reference/resource-properties/versions.md index 748aa477a4f..d2cb4a1f116 100644 --- a/website/docs/reference/resource-properties/versions.md +++ b/website/docs/reference/resource-properties/versions.md @@ -5,7 +5,7 @@ required: no keyword: governance, model version, model versioning, dbt model versioning --- -import VersionsCallout from '/snippets/_version-callout.md'; +import VersionsCallout from '/snippets/_model-version-callout.md'; diff --git a/website/snippets/_version-callout.md b/website/snippets/_model-version-callout.md similarity index 100% rename from website/snippets/_version-callout.md rename to website/snippets/_model-version-callout.md diff --git a/website/src/components/versionCallout/index.js b/website/src/components/versionCallout/index.js new file mode 100644 index 00000000000..975d6b6f1b9 --- /dev/null +++ b/website/src/components/versionCallout/index.js @@ -0,0 +1,23 @@ +import React from 'react'; +import Admonition from '@theme/Admonition'; + +const VersionCallout = ({ version }) => { + if (!version) { + return null; + } + + return ( +
+    <Admonition type="tip"> +      <p> +        Available from dbt v{version} or with the{' '} +        <a href="/docs/dbt-versions/cloud-release-tracks"> +          dbt Cloud "Latest" release track +        </a>{''}. +      </p> +    </Admonition>
+); +}; + +export default VersionCallout; diff --git a/website/src/theme/MDXComponents/index.js b/website/src/theme/MDXComponents/index.js index 422d6c99fab..c0a15e6c5b6 100644 --- a/website/src/theme/MDXComponents/index.js +++ b/website/src/theme/MDXComponents/index.js @@ -45,6 +45,7 @@ import Lifecycle from '@site/src/components/lifeCycle'; import DetailsToggle from '@site/src/components/detailsToggle'; import Expandable from '@site/src/components/expandable'; import ConfettiTrigger from '@site/src/components/confetti/'; +import VersionCallout from '@site/src/components/versionCallout'; const MDXComponents = { Head, @@ -97,5 +98,6 @@ const MDXComponents = { Expandable: Expandable, ConfettiTrigger: ConfettiTrigger, SortableTable: SortableTable, + VersionCallout: VersionCallout, }; export default MDXComponents;
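With the component registered in `MDXComponents`, any docs page can render the callout directly; a minimal usage sketch (the version value here is illustrative):

```mdx
<VersionCallout version="1.9" />
```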