diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index c9b25d3b71c..309872dd818 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -12,7 +12,8 @@ Uncomment if you're publishing docs for a prerelease version of dbt (delete if n - [ ] Add versioning components, as described in [Versioning Docs](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-entire-pages) - [ ] Add a note to the prerelease version [Migration Guide](https://github.com/dbt-labs/docs.getdbt.com/tree/current/website/docs/docs/dbt-versions/core-upgrade) --> -- [ ] Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) and [About versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) so my content adheres to these guidelines. +- [ ] Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) so my content adheres to these guidelines. +- [ ] For [docs versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#about-versioning), review how to [version a whole page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) and [version a block of content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content). - [ ] Add a checklist item for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." Adding new pages (delete if not applicable): @@ -22,4 +23,4 @@ Adding new pages (delete if not applicable): Removing or renaming existing pages (delete if not applicable): - [ ] Remove page from `website/sidebars.js` - [ ] Add an entry `website/static/_redirects` -- [ ] [Ran link testing](https://github.com/dbt-labs/docs.getdbt.com#running-the-cypress-tests-locally) to update the links that point to the deleted page +- [ ] Run link testing locally with `npm run build` to update the links that point to the deleted page diff --git a/.github/workflows/asana-connection.yml b/.github/workflows/asana-connection.yml new file mode 100644 index 00000000000..aced477bdac --- /dev/null +++ b/.github/workflows/asana-connection.yml @@ -0,0 +1,17 @@ +name: Show PR Status in Asana +on: + pull_request: + types: [opened, reopened] + +jobs: + create-asana-attachment-job: + runs-on: ubuntu-latest + name: Create pull request attachments on Asana tasks + steps: + - name: Create pull request attachments + uses: Asana/create-app-attachment-github-action@latest + id: postAttachment + with: + asana-secret: ${{ secrets.ASANA_SECRET }} + - name: Log output status + run: echo "Status is ${{ steps.postAttachment.outputs.status }}" diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml index f49c3f1317b..5e91cd899ab 100644 --- a/.github/workflows/lint.yml +++ b/.github/workflows/lint.yml @@ -12,8 +12,16 @@ jobs: uses: actions/setup-node@v3 with: node-version: '18.12.0' + + - name: Cache Node Modules + uses: actions/cache@v3 + id: cache-node-mods + with: + path: website/node_modules + key: node-modules-cache-v3-${{ hashFiles('**/package.json', '**/package-lock.json') }} - name: Install Packages + if: steps.cache-node-mods.outputs.cache-hit != 'true' run: cd website && npm ci - name: Run 
ESLint diff --git a/package-lock.json b/package-lock.json deleted file mode 100644 index 058db9205e2..00000000000 --- a/package-lock.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "name": "docs.getdbt.com", - "lockfileVersion": 2, - "requires": true, - "packages": {} -} diff --git a/website/blog/2021-11-29-open-source-community-growth.md b/website/blog/2021-11-29-open-source-community-growth.md index 8a71a504875..98b64cefa3d 100644 --- a/website/blog/2021-11-29-open-source-community-growth.md +++ b/website/blog/2021-11-29-open-source-community-growth.md @@ -57,7 +57,7 @@ For starters, I want to know how much conversation is occurring across the vario There are a ton of metrics that can be tracked in any GitHub project — committers, pull requests, forks, releases — but I started pretty simple. For each of the projects we participate in, I just want to know how the number of GitHub stars grows over time, and whether the growth is accelerating or flattening out. This has become a key performance indicator for open source communities, for better or for worse, and keeping track of it isn't optional. -Finally, I want to know how much Marquez and OpenLineage are being used. It used to be that when you wanted to consume a bit of tech, you'd download a file. Folks like me who study user behavior would track download counts as if they were stock prices. This is no longer the case; today, our tech is increasingly distributed through package managers and image repositories. Docker Hub and PyPI metrics have therefore become good indicators of consumption. Docker image pulls and runs of `pip install` are the modern day download and, as noisy as these metrics are, they indicate a similar level of user commitment. +Finally, I want to know how much Marquez and OpenLineage are being used. It used to be that when you wanted to consume a bit of tech, you'd download a file. Folks like me who study user behavior would track download counts as if they were stock prices. This is no longer the case; today, our tech is increasingly distributed through package managers and image repositories. Docker Hub and PyPI metrics have therefore become good indicators of consumption. Docker image pulls and runs of `python -m pip install` are the modern day download and, as noisy as these metrics are, they indicate a similar level of user commitment. To summarize, here are the metrics I decided to track (for now, anyway): - Slack messages (by user/ by community) diff --git a/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md b/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md index 451013b1572..44346e93741 100644 --- a/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md +++ b/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md @@ -159,7 +159,7 @@ pipelines: artifacts: # Save the dbt run artifacts for the next step (upload) - target/*.json script: - - pip install -r requirements.txt + - python -m pip install -r requirements.txt - mkdir ~/.dbt - cp .ci/profiles.yml ~/.dbt/profiles.yml - dbt deps @@ -208,7 +208,7 @@ pipelines: # Set up dbt environment + dbt packages. 
Rather than passing # profiles.yml to dbt commands explicitly, we'll store it where dbt # expects it: - - pip install -r requirements.txt + - python -m pip install -r requirements.txt - mkdir ~/.dbt - cp .ci/profiles.yml ~/.dbt/profiles.yml - dbt deps diff --git a/website/blog/2022-05-03-making-dbt-cloud-api-calls-using-dbt-cloud-cli.md b/website/blog/2022-05-03-making-dbt-cloud-api-calls-using-dbt-cloud-cli.md index 2ee774d4f1d..6758a28638c 100644 --- a/website/blog/2022-05-03-making-dbt-cloud-api-calls-using-dbt-cloud-cli.md +++ b/website/blog/2022-05-03-making-dbt-cloud-api-calls-using-dbt-cloud-cli.md @@ -59,7 +59,7 @@ You probably agree that the latter example is definitely more elegant and easier In addition to CLI commands that interact with a single dbt Cloud API endpoint there are composite helper commands that call one or more API endpoints and perform more complex operations. One example of composite commands are `dbt-cloud job export` and `dbt-cloud job import` where, under the hood, the export command performs a `dbt-cloud job get` and writes the job metadata to a file and the import command reads job parameters from a JSON file and calls `dbt-cloud job create`. The export and import commands can be used in tandem to move dbt Cloud jobs between projects. Another example is the `dbt-cloud job delete-all` which fetches a list of all jobs using `dbt-cloud job list` and then iterates over the list prompting the user if they want to delete the job. For each job that the user agrees to delete a `dbt-cloud job delete` is performed. -To install the CLI in your Python environment run `pip install dbt-cloud-cli` and you’re all set. You can use it locally in your development environment or e.g. in a GitHub actions workflow. +To install the CLI in your Python environment run `python -m pip install dbt-cloud-cli` and you’re all set. You can use it locally in your development environment or e.g. in a GitHub actions workflow. ## How the project came to be @@ -310,7 +310,7 @@ The `CatalogExploreCommand.execute` method implements the interactive exploratio I’ve included the app in the latest version of dbt-cloud-cli so you can test it out yourself! To use the app you need install dbt-cloud-cli with extra dependencies: ```bash -pip install dbt-cloud-cli[demo] +python -m pip install dbt-cloud-cli[demo] ``` Now you can the run app: diff --git a/website/blog/2023-04-18-building-a-kimball-dimensional-model-with-dbt.md b/website/blog/2023-04-18-building-a-kimball-dimensional-model-with-dbt.md index 3ca1f6ac2a9..ab364749eff 100644 --- a/website/blog/2023-04-18-building-a-kimball-dimensional-model-with-dbt.md +++ b/website/blog/2023-04-18-building-a-kimball-dimensional-model-with-dbt.md @@ -79,12 +79,12 @@ Depending on which database you’ve chosen, install the relevant database adapt ```text # install adaptor for duckdb -pip install dbt-duckdb +python -m pip install dbt-duckdb # OR # install adaptor for postgresql -pip install dbt-postgres +python -m pip install dbt-postgres ``` ### Step 4: Setup dbt profile diff --git a/website/blog/2023-11-14-specify-prod-environment.md b/website/blog/2023-11-14-specify-prod-environment.md new file mode 100644 index 00000000000..c6ad2b31027 --- /dev/null +++ b/website/blog/2023-11-14-specify-prod-environment.md @@ -0,0 +1,73 @@ +--- + +title: Why you should specify a production environment in dbt Cloud +description: "The bottom line: You should split your Environments in dbt Cloud based on their purposes (e.g. 
Production and Staging/CI) and mark one environment as Production. This will improve your CI experience and enable you to use dbt Explorer." +slug: specify-prod-environment + +authors: [joel_labes] + +tags: [dbt Cloud] +hide_table_of_contents: false + +date: 2023-11-14 +is_featured: false + +--- + +:::tip The Bottom Line: +You should [split your Jobs](#how) across Environments in dbt Cloud based on their purposes (e.g. Production and Staging/CI) and set one environment as Production. This will improve your CI experience and enable you to use dbt Explorer. +::: + +[Environmental segmentation](/docs/environments-in-dbt) has always been an important part of the analytics engineering workflow: + +- When developing new models you can [process a smaller subset of your data](/reference/dbt-jinja-functions/target#use-targetname-to-limit-data-in-dev) by using `target.name` or an environment variable. +- By building your production-grade models into [a different schema and database](https://docs.getdbt.com/docs/build/custom-schemas#managing-environments), you can experiment in peace without being worried that your changes will accidentally impact downstream users. +- Using dedicated credentials for production runs, instead of an analytics engineer's individual dev credentials, ensures that things don't break when that long-tenured employee finally hangs up their IDE. + +Historically, dbt Cloud required a separate environment for _Development_, but was otherwise unopinionated in how you configured your account. This mostly just worked – as long as you didn't have anything more complex than a CI job mixed in with a couple of production jobs – because important constructs like deferral in CI and documentation were only ever tied to a single job. + +But as companies' dbt deployments have grown more complex, it doesn't make sense to assume that a single job is enough anymore. We need to exchange a job-oriented strategy for a more mature and scalable environment-centric view of the world. To support this, a recent change in dbt Cloud enables project administrators to [mark one of their environments as the Production environment](/docs/deploy/deploy-environments#set-as-production-environment-beta), just as has long been possible for the Development environment. + +Explicitly separating your Production workloads lets dbt Cloud be smarter with the metadata it creates, and is particularly important for two new features: dbt Explorer and the revised CI workflows. + + + +## Make sure dbt Explorer always has the freshest information available + +**The old way**: Your dbt docs site was based on a single job's run. + +**The new way**: dbt Explorer uses metadata from across every invocation in a defined Production environment to build the richest and most up-to-date understanding of your project. + +Because dbt docs could only be updated by a single predetermined job, users who needed their documentation to immediately reflect changes deployed throughout the day (regardless of which job executed them) would find themselves forced to run a dedicated job which did nothing other than run `dbt docs generate` on a regular schedule. + +The Discovery API that powers dbt Explorer ingests all metadata generated by any dbt invocation, which means that it can always be up to date with the applied state of your project. However it doesn't make sense for dbt Explorer to show docs based on a PR that hasn't been merged yet. + +To avoid this conflation, you need to mark an environment as the Production environment. 
All runs completed in _that_ environment will contribute to dbt Explorer's view of your project, while others will be excluded. (Future versions of Explorer will support environment selection, so that you can preview your documentation changes as well.) + +## Run Slimmer CI than ever with environment-level deferral + +**The old way**: [Slim CI](/guides/set-up-ci?step=2) deferred to a single job, and would only detect changes as of that job's last build time. + +**The new way**: Changes are detected regardless of the job they were deployed in, removing false positives and overbuilding of models in CI. + +Just like dbt docs, relying on a single job to define your state for comparison purposes leads to a choice between unnecessarily rebuilding models which were deployed by another job, or creating a dedicated job that runs `dbt compile` on repeat to keep on top of all changes. + +By using the environment as the arbiter of state, any time a change is made to your Production deployment it will immediately be taken into consideration by subsequent Slim CI runs. + +## The easiest way to break apart your jobs {#how} + + + +For most projects, changing from a job-centric to environment-centric approach to metadata is straightforward and immediately pays dividends as described above. Assuming that your Staging/CI and Production jobs are currently intermingled, you can extricate them as follows: + +1. Create a new dbt Cloud environment called Staging +2. For each job that belongs to the Staging environment, edit the job and update its environment +3. Tick the ["Mark as Production environment" box](/docs/deploy/deploy-environments#set-as-production-environment-beta) in your original environment's settings + +## Conclusion + +Until very recently, I only thought of Environments in dbt Cloud as a way to use different authentication credentials in different contexts. And until very recently, I was mostly right. + +Not anymore. The metadata dbt creates is critical for effective data teams – whether you're concerned about cost savings, discoverability, increased development speed or reliable results across your organization – but is only fully effective if it's segmented by the environment that created it. + +Take a few minutes to clean up your environments - it'll make all the difference. diff --git a/website/blog/categories.yml b/website/blog/categories.yml index 8103f58cc33..45acf246dff 100644 --- a/website/blog/categories.yml +++ b/website/blog/categories.yml @@ -19,3 +19,5 @@ display_title: SQL magic description: Stories of dbt developers making SQL sing across warehouses. is_featured: true +- name: dbt Cloud + description: Using dbt Cloud to build for scale \ No newline at end of file diff --git a/website/docs/best-practices/best-practice-workflows.md b/website/docs/best-practices/best-practice-workflows.md index f06e785c6db..9b79c244901 100644 --- a/website/docs/best-practices/best-practice-workflows.md +++ b/website/docs/best-practices/best-practice-workflows.md @@ -24,7 +24,7 @@ SQL styles, field naming conventions, and other rules for your dbt project shoul :::info Our style guide -We've made our [style guide](https://github.com/dbt-labs/corp/blob/main/dbt_style_guide.md) public – these can act as a good starting point for your own style guide. +We've made our [style guide](/best-practices/how-we-style/0-how-we-style-our-dbt-projects) public – it can act as a good starting point for your own style guide.
::: diff --git a/website/docs/best-practices/clone-incremental-models.md b/website/docs/best-practices/clone-incremental-models.md new file mode 100644 index 00000000000..4096af489ab --- /dev/null +++ b/website/docs/best-practices/clone-incremental-models.md @@ -0,0 +1,79 @@ +--- +title: "Clone incremental models as the first step of your CI job" +id: "clone-incremental-models" +description: Learn how to clone incremental models as the first step of your CI job. +displayText: Clone incremental models as the first step of your CI job +hoverSnippet: Learn how to clone incremental models for CI jobs. +--- + +Before you begin, you must be aware of a few conditions: +- `dbt clone` is only available with dbt version 1.6 and newer. Refer to our [upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) for help enabling newer versions in dbt Cloud. +- This strategy only works for warehouses that support zero-copy cloning (otherwise `dbt clone` will just create pointer views). +- Some teams may want to test that their incremental models run in both incremental mode and full-refresh mode. + +Imagine you've created a [Slim CI job](/docs/deploy/continuous-integration) in dbt Cloud and it is configured to: + +- Defer to your production environment. +- Run the command `dbt build --select state:modified+` to run and test all of the models you've modified and their downstream dependencies. +- Trigger whenever a developer on your team opens a PR against the main branch. + + + +Now imagine your dbt project looks something like this in the DAG: + + + +When you open a pull request (PR) that modifies `dim_wizards`, your CI job will kick off and build _only the modified models and their downstream dependencies_ (in this case, `dim_wizards` and `fct_orders`) into a temporary schema that's unique to your PR. + +This build mimics the behavior of what will happen once the PR is merged into the main branch. It ensures you're not introducing breaking changes, without needing to build your entire dbt project. + +## What happens when one of the modified models (or one of their downstream dependencies) is an incremental model? + +Because your CI job is building modified models into a PR-specific schema, on the first execution of `dbt build --select state:modified+`, the modified incremental model will be built in its entirety _because it does not yet exist in the PR-specific schema_ and [is_incremental will be false](/docs/build/incremental-models#understanding-the-is_incremental-macro). You're running in `full-refresh` mode. + +This can be suboptimal because: +- Typically incremental models are your largest datasets, so they take a long time to build in their entirety, which can slow down development time and incur high warehouse costs. +- There are situations where a `full-refresh` of the incremental model passes successfully in your CI job but an _incremental_ build of that same table in prod would fail when the PR is merged into main (think schema drift, where the [on_schema_change](/docs/build/incremental-models#what-if-the-columns-of-my-incremental-model-change) config is set to `fail`). + +You can alleviate these problems by zero-copy cloning the relevant, pre-existing incremental models into your PR-specific schema as the first step of the CI job using the `dbt clone` command. This way, the incremental models already exist in the PR-specific schema when you first execute the command `dbt build --select state:modified+`, so the `is_incremental` flag will be `true`.
+ +You'll have two commands for your dbt Cloud CI check to execute: +1. Clone all of the pre-existing incremental models that have been modified or are downstream of another model that has been modified: `dbt clone --select state:modified+,config.materialized:incremental,state:old` +2. Build all of the models that have been modified and their downstream dependencies: `dbt build --select state:modified+` + +Because of your first clone step, the incremental models selected in your `dbt build` on the second step will run in incremental mode. (A consolidated sketch of these two commands appears at the end of this page.) + + + +Your CI jobs will run faster, and you're more accurately mimicking the behavior of what will happen once the PR has been merged into main. + +### Expansion on "think schema drift, where the [on_schema_change](/docs/build/incremental-models#what-if-the-columns-of-my-incremental-model-change) config is set to `fail`" from above + +Imagine you have an incremental model `my_incremental_model` with the following config: + +```sql + +{{ + config( + materialized='incremental', + unique_key='unique_id', + on_schema_change='fail' + ) +}} + +``` + +Now, let’s say you open up a PR that adds a new column to `my_incremental_model`. In this case: +- An incremental build will fail. +- A `full-refresh` will succeed. + +If you have a daily production job that just executes `dbt build` without a `--full-refresh` flag, once the PR is merged into main and the job kicks off, you will get a failure. So the question is: what do you want to happen in CI? +- Do you want to also get a failure in CI, so that you know that once this PR is merged into main you need to immediately execute a `dbt build --full-refresh --select my_incremental_model` in production in order to avoid a failure in prod? This will block your CI check from passing. +- Do you want your CI check to succeed, because once you do run a `full-refresh` for this model in prod you will be in a successful state? This may lead to unpleasant surprises: your production job will suddenly start failing once you merge this PR into main if you don’t remember that you need to execute a `dbt build --full-refresh --select my_incremental_model` in production. + +There’s probably no perfect solution here; it’s all just tradeoffs! Our preference would be to have the failing CI job and have to manually override the blocking branch protection rule so that there are no surprises and we can proactively run the appropriate command in production once the PR is merged. + +### Expansion on "why `state:old`" + +For brand-new incremental models, you want them to run in `full-refresh` mode in CI, because they will run in `full-refresh` mode in production when the PR is merged into `main`. They also don't exist yet in the production environment... they're brand new! +If you don't specify this, you won't get an error, just a “No relation found in state manifest for…” message. So, it technically works without specifying `state:old`, but adding `state:old` is more explicit and means it won't even try to clone the brand-new incremental models.
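+
+To make the ordering concrete, here's a minimal sketch of the two CI steps as shell commands, using exactly the selectors described above (adapt the selection syntax to your project as needed):
+
+```bash
+# Step 1: clone the pre-existing incremental models that were modified
+# (or are downstream of a modification) into the PR-specific schema
+dbt clone --select state:modified+,config.materialized:incremental,state:old
+
+# Step 2: build modified models and their downstream dependencies;
+# thanks to the clone, these incremental models now run in incremental mode
+dbt build --select state:modified+
+```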
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md index ffbd78b939c..6e9153a3780 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md @@ -23,8 +23,8 @@ We'll use pip to install MetricFlow and our dbt adapter: python -m venv [virtual environment name] source [virtual environment name]/bin/activate # install dbt and MetricFlow -pip install "dbt-metricflow[adapter name]" -# e.g. pip install "dbt-metricflow[snowflake]" +python -m pip install "dbt-metricflow[adapter name]" +# e.g. python -m pip install "dbt-metricflow[snowflake]" ``` Lastly, to get to the pre-Semantic Layer starting state, checkout the `start-here` branch. diff --git a/website/docs/community/resources/getting-help.md b/website/docs/community/resources/getting-help.md index 2f30644186e..19b7c22fbdf 100644 --- a/website/docs/community/resources/getting-help.md +++ b/website/docs/community/resources/getting-help.md @@ -60,4 +60,4 @@ If you want to receive dbt training, check out our [dbt Learn](https://learn.get - Billing - Bug reports related to the web interface -As a rule of thumb, if you are using dbt Cloud, but your problem is related to code within your dbt project, then please follow the above process rather than reaching out to support. +As a rule of thumb, if you are using dbt Cloud, but your problem is related to code within your dbt project, then please follow the above process rather than reaching out to support. Refer to [dbt Cloud support](/docs/dbt-support) for more information. diff --git a/website/docs/community/spotlight/alison-stanton.md b/website/docs/community/spotlight/alison-stanton.md new file mode 100644 index 00000000000..2054f78b0b7 --- /dev/null +++ b/website/docs/community/spotlight/alison-stanton.md @@ -0,0 +1,92 @@ +--- +id: alison-stanton +title: Alison Stanton +description: | + I started programming 20+ years ago. I moved from web applications into transforming data and business intelligence reporting because it's both hard and useful. The majority of my career has been in engineering for SaaS companies. For my last few positions I've been brought in to transition larger, older companies to a modern data platform and ways of thinking. + + I am dbt Certified. I attend Coalesce and other dbt events virtually. I speak up in dbt Slack and on the dbt-core, dbt-redshift, and dbt-sqlserver repositories. dbt Slack is my happy place, especially #advice-for-dbt-power-users. I care a lot about the dbt documentation and dbt doc. +image: /img/community/spotlight/alison.jpg +pronouns: she/her +location: Chicago, IL, USA +jobTitle: AVP, Analytics Engineering Lead +organization: Advocates for SOGIE Data Collection +socialLinks: + - name: LinkedIn + link: https://www.linkedin.com/in/alisonstanton/ + - name: Github + link: https://github.com/alison985/ +dateCreated: 2023-11-07 +hide_table_of_contents: true +--- + +## When did you join the dbt community and in what way has it impacted your career? + +I joined the dbt community when I joined an employer in mid-2020. To summarize the important things that dbt has given me: it allowed me to focus on the next set of data challenges instead of staying in toil. Data folks joke that we're plumbers, but we're digital plumbers and that distinction should enable us to be DRY. 
That means not only writing DRY code like dbt allows, but also having tooling automation to DRY up repetitive tasks like dbt provides. + +dbt's existence flipped the experience of data testing on its head for me. I went from a) years of instigating tech discussions on how to systematize data quality checks and b) building my own SQL tests and design patterns, to having built-in mechanisms for data testing. + +dbt and the dbt community materials are assets I can use in order to provide validation for things I have said, do say, and will say about data. Having outside voices to point to when requesting investment in data up-front - to avoid problems later - is an under-appreciated tool for data leaders' toolboxes. + +dbt's community has given me access to both a) high-quality, seasoned SMEs in my field to learn from and b) newer folks I can help. Both are gifts that I cherish. + +## What dbt community leader do you identify with? How are you looking to grow your leadership in the dbt community? + +Who I want to be when I grow up: + +- MJ, who was the first person to ever say "data build tool" to me. If I'd listened to her then I could have been part of the dbt community years sooner. + +- Christine Dixon, who presented "Could You Defend Your Data in Court?" at Coalesce 2023. In your entire data career, that is the most important piece of education you'll get. + +- The dbt community team in general. Hands-down the most important work they do is the dbt Slack community, which gives me and others the accessibility we need to participate. Gwen Windflower (Winnie) for her extraordinary ability to bridge technical nuance with business needs on-the-fly. Dave Connors for being the first voice for "a node is a node is a node". Joel Labes for creating the ability to emoji-react with :sparkles: to post to the #best-of-slack channel. And so on. The decision to foster a space for data instead of just for their product because that enhances their product. The extremely impressive ability to maintain a problem-solving-is-cool, participate-as-you-can, chorus-of-voices, international, not-only-cis-men, and we're-all-in-this-together community. + +- Other (all?) dbt Labs employees who engage with the community, instead of having a false separation with it — like most software companies. Welcoming feedback, listening to it, and actioning or filtering it out (ex. Mirna Wong, account reps). Thinking holistically about the eco-system, not just one feature at a time (ex. Anders). Responsiveness and ability to translate diverse items into technical clarity and focused actions (ex. Doug Beatty, the dbt support team). I've been in software and open source and online communities for a long time - these are rare things we should not take for granted. + +- Josh Devlin for prolificness that demonstrates expertise and dedication to helping. + +- The maintainers of dbt packages like dbt-utils, dbt-expectations, dbt-date, etc. + +- Everyone who gets over their fear to ask a question, propose an answer that may not work, or otherwise take a risk by sharing their voice. + +I hope I can support my employer, my professional development, and my dbt community through the following: + +- Elevate dbt understanding of and support for Enterprise-size company use cases through dialogue, requests, and examples. +- Emphasize rigor with defensive coding and comprehensive testing practices. +- Improve the onboarding and up-skilling of dbt engineers through feedback and edits on docs.getdbt.com.
+- Contribute to the maintenance of a collaborative and helpful dbt community as the number of dbt practitioners reaches various growth stages and tipping points. +- Engage in dialogue. Provide feedback. Champion developer experience as a priority. Be a good open-source citizen on GitHub. + +## What have you learned from community members? What do you hope others can learn from you? + +I have learned: + +- Details on DAG sequencing. +- How to make an engineering proposal a community conversation. +- The dbt Semantic Layer. + +So many things that are now so engrained in me that I can't remember not knowing them. + +I can teach and share about: + +- Naming new concepts and how to choose those names. +- Reproducibility, reconciliation, and audits. +- Data ethics. +- Demographic questions for sexual orientation and/or gender identity on a form. I'm happy to be your shortcut to the most complicated data and most engrained tech debt in history. + +I also geek out talking about: + +- reusing functionality in creative ways, +- balancing trade-offs in data schema modeling, +- dealing with all of an organization's data holistically, +- tracking instrumentation, and +- the philosophy on prioritization. + +The next things on my agenda to learn about: + +- Successes and failures in data literacy work. The best I've found so far is 1:1 interactions and that doesn't scale. +- How to reduce the amount of time running `dbt test` takes while maintaining coverage. +- Data ethics. +- The things you all think are most important, which you can flag by giving them a :sparkles: emoji reaction in Slack. + +## Anything else interesting you want to tell us? + +My gratitude to each community member for this community. diff --git a/website/docs/community/spotlight/bruno-de-lima.md b/website/docs/community/spotlight/bruno-de-lima.md index 7f40f66859c..0365ee6c6a8 100644 --- a/website/docs/community/spotlight/bruno-de-lima.md +++ b/website/docs/community/spotlight/bruno-de-lima.md @@ -2,11 +2,11 @@ id: bruno-de-lima title: Bruno de Lima description: | - I am an Analytics Engineer and aspiring tech writer coming from an academic engineering background. + Hi all! I'm a Data Engineer, deeply fascinated by the awesomeness of dbt. I love talking about dbt, creating content from daily tips to blog posts and engaging with this vibrant community! - I worked at Indicium as an Analytics Engineer for more than a year, having worked with dbt (of course, every day) for transformation; BigQuery, Snowflake, and Databricks as data warehouses; Power BI and Tableau for BI; and Airflow for orchestration. + Started my career at the beginning of 2022 at Indicium as an Analytics Engineer, working with dbt from day 1. By 2023, my path took a global trajectory as I joined phData as a Data Engineer, expanding my experiences and forging connections beyond Brazil. While dbt is at the heart of my expertise, I've also delved into data warehouses such as Snowflake, Databricks, and BigQuery; visualization tools like Power BI and Tableau; and several minor modern data stack tools. - I actively participate in the dbt community, having attended two dbt meetups in Brazil organized by Indicium; writing about dbt-related topics in my Medium and LinkedIn profiles; contributing to the code; and frequently checking dbt Slack and Discourse, helping (and being helped by) other dbt practitioners. If you are a community member, you may have seen me around!
+ I actively participate in the dbt community, having attended two dbt Meetups in Brazil organized by Indicium; writing about dbt-related topics in my Medium and LinkedIn profiles; contributing to the code; and frequently checking dbt Slack and Discourse, helping (and being helped by) other dbt practitioners. If you are a community member, you may have seen me around! image: /img/community/spotlight/bruno-de-lima.jpg pronouns: he/him location: Florianópolis, Brazil @@ -18,7 +18,7 @@ socialLinks: link: https://www.linkedin.com/in/brunoszdl/ - name: Medium link: https://medium.com/@bruno.szdl -dateCreated: 2023-03-28 +dateCreated: 2023-11-05 hide_table_of_contents: true --- @@ -30,7 +30,7 @@ It took me some time to become an active member of the dbt community. I started Inspired by other members, especially Josh Devlin and Owen Prough, I began answering questions on Slack and Discourse. For questions I couldn't answer, I would try engaging in discussions about possible solutions or provide useful links. I also started posting dbt tips on LinkedIn to help practitioners learn about new features or to refresh their memories about existing ones. -By being more involved in the community, I felt more connected and supported. I received help from other members, and now, I could help others, too. I was happy with this arrangement, but more unexpected surprises came my way. My active participation in Slack, discourse, and LinkedIn opened doors to new connections and career opportunities. I had the pleasure of meeting a lot of incredible people and receiving exciting job offers. +By being more involved in the community, I felt more connected and supported. I received help from other members, and now, I could help others, too. I was happy with this arrangement, but more unexpected surprises came my way. My active participation in Slack, Discourse, and LinkedIn opened doors to new connections and career opportunities. I had the pleasure of meeting a lot of incredible people and receiving exciting job offers, including the offer to work at phData. Thanks to the dbt community, I went from feeling uncertain about my career prospects to having a solid career and being surrounded by incredible people. diff --git a/website/docs/community/spotlight/dakota-kelley.md b/website/docs/community/spotlight/dakota-kelley.md new file mode 100644 index 00000000000..57834d9cdff --- /dev/null +++ b/website/docs/community/spotlight/dakota-kelley.md @@ -0,0 +1,30 @@ +--- +id: dakota-kelley +title: Dakota Kelley +description: | + For the last ~2 years I've worked at phData. Before that I spent 8 years working as a Software Developer in the public sector. Currently I'm a Solution Architect, helping our customers and clients implement dbt on Snowflake, working across multiple cloud providers. + + I first started reading about dbt when I was in grad school about 3 years ago. When I began with phData, I had a fantastic opportunity to work with dbt. From there I fell in love with the engineering practices and structure that I always felt were missing from data work. Since then, I've been fortunate enough to speak at Coalesce 2022 and at Coalesce 2023. On top of this, I've written numerous blog posts about dbt as well.
+image: /img/community/spotlight/dakota.jpg +pronouns: he/him +location: Edmond, USA +jobTitle: Solution Architect +companyName: phData +socialLinks: + - name: LinkedIn + link: https://www.linkedin.com/in/dakota-kelley/ +dateCreated: 2023-11-08 +hide_table_of_contents: true +--- + +## When did you join the dbt community and in what way has it impacted your career? + +I joined the dbt Community not too long after my first working experience. One of my passions is giving back and helping others, and being a part of the community allows me to help others with problems I've tackled before. Along the way it helps me learn new ways and see different methods to solve a wide variety of problems. Every time I interact with the community, I learn something new, and that energizes me. + +## What dbt community leader do you identify with? How are you looking to grow your leadership in the dbt community? + +This is a tough one. I know there are several, but the main qualities I resonate with are from those who dig in and help each other. There are always nuances to others' situations, and it's good to dig in together, understand those, and seek a solution. The other quality I look for is someone who is trying to pull others up with them. At the end of the day we should all be striving to make all things better than they were when we arrived, regardless of whether that's the dbt Community or the local park we visit for rest and relaxation. + +## What have you learned from community members? What do you hope others can learn from you? + +The thing I hope others take away from me is to genuinely support others and tackle problems with curiosity. There used to be a time when I was always worried about being wrong, so I wouldn't get too involved. It's okay to be wrong; that's how we learn new ways to handle problems and find new ways to grow. We just all have to be open to learning and trying our best to help and support each other. diff --git a/website/docs/community/spotlight/fabiyi-opeyemi.md b/website/docs/community/spotlight/fabiyi-opeyemi.md index f26ee27910b..f67ff4aaefc 100644 --- a/website/docs/community/spotlight/fabiyi-opeyemi.md +++ b/website/docs/community/spotlight/fabiyi-opeyemi.md @@ -2,9 +2,9 @@ id: fabiyi-opeyemi title: Opeyemi Fabiyi description: | - I'm an Analytics Engineer with Data Culture, a Data Consulting firm where I use dbt regularly to help clients build quality-tested data assets. I've also got a background in financial services and supply chain. I'm passionate about helping organizations to become data-driven and I majorly use dbt for data modeling, while the other aspect of the stack is largely dependent on the client infrastructure I'm working for, so I often say I'm tool-agnostic. 😀 + I'm an Analytics Engineer with Data Culture, a Data Consulting firm where I use dbt regularly to help clients build quality-tested data assets. I've also got a background in financial services and supply chain. I'm passionate about helping organizations to become data-driven and I majorly use dbt for data modeling, while the other aspect of the stack is largely dependent on the client infrastructure I'm working for, so I often say I'm tool-agnostic. 😀 - I'm the founder of Nigeria's Young Data Professional Community.
I'm also the organizer of the Lagos dbt Meetup which I started, and one of the organizers of the DataFest Africa Conference. I became an active member of the dbt Community in 2021 & spoke at Coalesce 2022. image: /img/community/spotlight/fabiyi-opeyemi.jpg pronouns: he/him location: Lagos, Nigeria @@ -16,7 +16,7 @@ socialLinks: link: https://twitter.com/Opiano_1 - name: LinkedIn link: https://www.linkedin.com/in/opeyemifabiyi/ -dateCreated: 2023-07-02 +dateCreated: 2023-11-06 hide_table_of_contents: true --- diff --git a/website/docs/community/spotlight/josh-devlin.md b/website/docs/community/spotlight/josh-devlin.md index 1a1db622209..d8a9b91c282 100644 --- a/website/docs/community/spotlight/josh-devlin.md +++ b/website/docs/community/spotlight/josh-devlin.md @@ -2,23 +2,26 @@ id: josh-devlin title: Josh Devlin description: | - After "discovering" dbt in early 2020, I joined the community and used it as a learning tool while I tried to get dbt introduced at my company. By helping others, I learned about common pitfalls, best practices, and the breadth of the tool. When it came time to implement it months later, I already felt like an expert! + Josh Devlin has a rich history of community involvement and technical expertise in both the dbt and wider analytics communities. - In December 2020 I attended the first virtual Coalesce conference, attending all 4 days across 3 time zones! I found my quirky-nerdy-purple-people, and felt at home. + Discovering dbt in early 2020, he quickly became an integral member of its community, leveraging the platform as a learning tool and aiding others along their dbt journey. Josh has helped thousands of dbt users with his advice and near-encyclopaedic knowledge of dbt. - 3 years later I had the pleasure of presenting at my first dbt Meetup in Sydney, and then at the first in-person Coalesce in New Orleans. My passion is helping people, and I'm glad that the dbt community gives me a place to do that! + Beyond the online community, he transitioned from being an attendee at the first virtual Coalesce conference in December 2020 to a presenter at the first in-person Coalesce event in New Orleans in 2022. He has also contributed to the dbt-core and dbt-snowflake codebases, helping improve the product in the most direct way. + + His continuous contributions echo his philosophy of learning through teaching, a principle that has not only enriched the dbt community but also significantly bolstered his proficiency with the tool, making him a valuable community member. + + Aside from his technical endeavors, Josh carries a heart for communal growth and an individual's ability to contribute to a larger whole, a trait mirrored in his earlier pursuits as an orchestral musician. His story is a blend of technical acumen, communal involvement, and a nuanced appreciation for the symbiotic relationship between teaching and learning, making him a notable figure in the analytics engineering space. 
image: /img/community/spotlight/josh-devlin.jpg pronouns: he/him location: Melbourne, Australia (but spent most of the last decade in Houston, USA) jobTitle: Senior Analytics Engineer companyName: Canva -organization: "" socialLinks: - name: Twitter link: https://twitter.com/JayPeeDevlin - name: LinkedIn link: https://www.linkedin.com/in/josh-devlin/ -dateCreated: 2023-06-27 +dateCreated: 2023-11-10 hide_table_of_contents: true --- diff --git a/website/docs/community/spotlight/karen-hsieh.md b/website/docs/community/spotlight/karen-hsieh.md index 1a5cc8c4788..5147f39ce59 100644 --- a/website/docs/community/spotlight/karen-hsieh.md +++ b/website/docs/community/spotlight/karen-hsieh.md @@ -12,7 +12,7 @@ description: | image: /img/community/spotlight/karen-hsieh.jpg pronouns: she/her location: Taipei, Taiwan -jobTitle: Director of Product & Data +jobTitle: Director of Tech & Data companyName: ALPHA Camp organization: "" socialLinks: @@ -22,7 +22,7 @@ socialLinks: link: https://www.linkedin.com/in/karenhsieh/ - name: Medium link: https://medium.com/@ijacwei -dateCreated: 2023-03-24 +dateCreated: 2023-11-04 hide_table_of_contents: true --- diff --git a/website/docs/community/spotlight/oliver-cramer.md b/website/docs/community/spotlight/oliver-cramer.md new file mode 100644 index 00000000000..bfd62db0908 --- /dev/null +++ b/website/docs/community/spotlight/oliver-cramer.md @@ -0,0 +1,35 @@ +--- +id: oliver-cramer +title: Oliver Cramer +description: | + When I joined Aquila Capital in early 2022, I had the modern data stack with SqlDBM, dbt & Snowflake available. During the first half-year I joined the dbt community. I have been working in the business intelligence field for many years. In 2006 I founded the first TDWI Roundtable in the DACH region. I often speak at conferences, such as the Snowflake Summit and the German TDWI conference. + I have been very involved in the Data Vault community for over 20 years and I do a lot of work with dbt Labs' Sean McIntyre and Victoria Mola to promote Data Vault in EMEA. I have even travelled to Canada and China to meet Data Vault community members! Currently I have a group looking at the Data Vault dbt packages. The German Data Vault User Group (DDVUG) has published a sample database to test Data Warehouse Automation tools. + In addition, I founded the Analytics Engineering Northern Germany Meetup Group, which will transition into an official dbt Meetup, the Northern Germany dbt Meetup. +image: /img/community/spotlight/oliver.jpg +pronouns: he/him +location: Celle, Germany +jobTitle: Lead Data Warehouse Architect +companyName: Aquila Capital +organization: TDWI Germany +socialLinks: + - name: LinkedIn + link: https://www.linkedin.com/in/oliver-cramer/ +dateCreated: 2023-11-02 +hide_table_of_contents: true +--- + +## When did you join the dbt community and in what way has it impacted your career? + +I joined the dbt community in 2022. My current focus is on building modern data teams. There is no magic formula for structuring your analytics function. Given the pace of technological change in our industry, the structure of a data team must evolve over time. + +## What dbt community leader do you identify with? How are you looking to grow your leadership in the dbt community? + +I like working with dbt Labs' Sean McIntyre to promote Data Vault in Europe, and Victoria Perez Mola, also from dbt Labs, is always a great help when I have questions about dbt. + +## What have you learned from community members? What do you hope others can learn from you?
+ +I just think it's good to have a community, to be able to ask questions and get good answers. + +## Anything else interesting you want to tell us? + +The Data Vault community is actively looking forward to supporting the message that dbt Cloud (plus packages) is a real alternative that works. diff --git a/website/docs/community/spotlight/sam-debruyn.md b/website/docs/community/spotlight/sam-debruyn.md new file mode 100644 index 00000000000..166adf58b09 --- /dev/null +++ b/website/docs/community/spotlight/sam-debruyn.md @@ -0,0 +1,37 @@ +--- +id: sam-debruyn +title: Sam Debruyn +description: | + I have a background of about 10 years in software engineering and moved to data engineering in 2020. Today, I lead dataroots's data & cloud unit on a technical level, allowing me to share knowledge and help multiple teams and customers, while still being hands-on every day. In 2021 and 2022, I did a lot of work on dbt-core and the dbt adapters for Microsoft SQL Server, Azure SQL, Azure Synapse, and now also Microsoft Fabric. I spoke at a few meetups and conferences about dbt and other technologies which I'm passionate about. Sharing knowledge is what drives me, so in 2023 I founded the Belgium dbt Meetup. Every meetup has reached maximum capacity ever since. +image: /img/community/spotlight/sam.jpg +pronouns: he/him +location: Heist-op-den-Berg, Belgium +jobTitle: Tech Lead Data & Cloud +companyName: dataroots +organization: "" +socialLinks: + - name: Twitter + link: https://twitter.com/s_debruyn + - name: LinkedIn + link: https://www.linkedin.com/in/samueldebruyn/ + - name: Blog + link: https://debruyn.dev/ +dateCreated: 2023-11-03 +hide_table_of_contents: true +--- + +## When did you join the dbt community and in what way has it impacted your career? + +I joined the dbt Community at the end of 2020, when we had dbt 0.18. At first, I was a bit suspicious. I thought to myself, how could a tool this simple make such a big difference? But after giving it a try, I was convinced: this is what we'll all be using for our data transformations in the future. dbt shines in its simplicity and very low learning curve. Thanks to dbt, a lot more people can become proficient in data analytics. I became a dbt evangelist, both at my job as well as in local and online data communities. I think that data holds the truth. And I think that the more people we can give access to work with data, so that they don't have to depend on others to work with complex tooling, the more we can achieve together. + +## What dbt community leader do you identify with? How are you looking to grow your leadership in the dbt community? + +It's hard to pick one person. There are lots of folks who inspired me along the way. There is Anders Swanson (known as dataders on GitHub), with whom I've spent countless hours discussing how we can bring two things I like together: dbt and the Microsoft SQL products. It's amazing to look back on what we achieved now that dbt Labs and Microsoft are working together to bring dbt support for Fabric and Synapse. There is also Jeremy Cohen (jerco), whose lengthy GitHub discussions bring inspiration to how you can do even more with dbt and what the future might hold. Cor Zuurmond (JCZuurmond) inspired me to start contributing to dbt-core, adapters, and related packages. He did an impressive amount of work by making dbt-spark even better, building pytest integration for dbt, and of course by bringing dbt to the world's most used database: dbt-excel. + +## What have you learned from community members?
What do you hope others can learn from you? + +dbt doesn't only shine when you're using it, but also under the hood. dbt's codebase is very approachable and consistently well written, with code that is clean, elegant, and easy to understand. When you're thinking about a potential feature, a bugfix, or building integrations with dbt, just go to Slack or GitHub and see what you can do to make that happen. You can contribute by discussing potential features, adding documentation, writing code, and more. You don't need to be a Python expert to get started. + +## Anything else interesting you want to tell us? + +The dbt community is one of the biggest data communities globally, but also the most welcoming one. It's amazing how nice, friendly, and approachable everyone is. It's great to be part of this community. diff --git a/website/docs/community/spotlight/stacy-lo.md b/website/docs/community/spotlight/stacy-lo.md new file mode 100644 index 00000000000..f0b70fcc225 --- /dev/null +++ b/website/docs/community/spotlight/stacy-lo.md @@ -0,0 +1,40 @@ +--- +id: stacy-lo +title: Stacy Lo +description: | + I began my career as a data analyst, then transitioned to a few different roles in data and software development. Analytics Engineer is the best title to describe my expertise in data. + + I’ve been in the dbt Community for almost a year. In April, I shared my experience adopting dbt at the Taipei dbt Meetup, which inspired me to write technical articles. + + In Taiwan, the annual "iThome Iron Man Contest" happens in September, where participants post a technical article written in Mandarin every day for 30 consecutive days. Since no one has ever written about dbt in the contest, I'd like to be the first person, and that’s what I have been busy with in the past couple of months. +image: /img/community/spotlight/stacy.jpg +pronouns: she/her +location: Taipei, Taiwan +jobTitle: Senior IT Developer +companyName: Teamson +socialLinks: + - name: LinkedIn + link: https://www.linkedin.com/in/olycats/ +dateCreated: 2023-11-01 +hide_table_of_contents: true +--- + +## When did you join the dbt community and in what way has it impacted your career? + +I joined dbt Slack in November 2022. That was when our company decided to use dbt in our data architecture, so I joined the #local-taipei channel in dbt Slack and introduced myself. To my surprise, I was immediately invited to share my experience at a Taipei dbt Meetup. I had just joined the community, had never attended any other meetups, did not know anyone there, and was very new to dbt. + +The biggest impact on my career is that I gained a lot of visibility! I got to know a lot of great data people, and now I have one meetup presentation recorded on YouTube, 30 technical articles from the iThome Iron Man Contest, and now I am featured in the dbt Community Spotlight! + +## What dbt community leader do you identify with? How are you looking to grow your leadership in the dbt community? + +Karen Hsieh is the best! She not only brought me into the dbt Community by way of the #local-taipei channel in dbt Slack, but she also encouraged me to contribute to the community in many ways, without making me feel pressured. With her passion and leading style, Karen successfully built a friendly and diverse group of people in #local-taipei. + +I’d also like to recommend Bruno de Lima's LinkedIn posts. His 'dbt Tips of the Day' effectively deliver knowledge in a user-friendly way. In addition, I really enjoyed the dbt exam practice polls.
Learning dbt can be a challenge, but Bruno makes it both easy and fun! + +## What have you learned from community members? What do you hope others can learn from you? + +I learned that there are many ways to contribute to the community, regardless of our background or skill level. Everyone has something valuable to offer, and we should never be afraid to share. Let's find our own ways to make an impact! + +## Anything else interesting you want to tell us? + +Although the #local-taipei channel in dbt Slack doesn't have that many people, we still managed to assemble a team of 7 people to join the Iron Man Contest. We produced a total of 200 articles in 30 days on topics around dbt and data. I don’t know how many people will find them useful, but it's definitely a great start to raising awareness of dbt in Taiwan. diff --git a/website/docs/community/spotlight/sydney-burns.md b/website/docs/community/spotlight/sydney-burns.md new file mode 100644 index 00000000000..ecebd6cdec0 --- /dev/null +++ b/website/docs/community/spotlight/sydney-burns.md @@ -0,0 +1,34 @@ +--- +id: sydney-burns +title: Sydney Burns +description: | + In 2019, I started as an analytics intern at a healthcare tech startup. I learned about dbt in 2020 and joined the community to self-teach. The following year, I started using dbt professionally as a consultant, and was able to pick up various parts of the stack and dive into different implementations. That experience empowered me to strike a better balance between "best practices" and what suits a specific team best. I also spoke at Coalesce 2022, a highlight of my career! + + Now, I collaborate with other data professionals at Webflow, where I'm focused on enhancing and scaling our data operations. I strive to share the same enthusiasm, support, and knowledge with my team that I've gained from the broader community! +image: /img/community/spotlight/sydney.jpg +pronouns: she/her +location: Panama City, FL, USA +jobTitle: Senior Analytics Engineer +companyName: Webflow +socialLinks: + - name: LinkedIn + link: https://www.linkedin.com/in/sydneyeburns/ +dateCreated: 2023-11-09 +hide_table_of_contents: true +--- + +## When did you join the dbt community and in what way has it impacted your career? + +The stack I used in my first data role was outdated and highly manual. Where I live, modern tech companies are few and far between, and I didn't have many in-person resources nor enough knowledge to realize that another world was possible at my skill level. I was thrilled to find a pocket of the Internet where similarly frustrated but creative data folks were sharing thoughtful solutions to problems I'd been struggling with! + +## What dbt community leader do you identify with? How are you looking to grow your leadership in the dbt community? + +Christine Berger was my first ever (best ever!) data colleague, and the one who first introduced me to dbt. + +There are certain qualities that I've always valued in her, have found in many others across the community, and strive to cultivate in myself — earnestness, curiosity, creativity, and consistently doing good work with deep care. + +## What have you learned from community members? What do you hope others can learn from you? + +I spent too much time in my early career feeling scared to ask for help because I didn't want others to think I was incompetent. I'd spin my wheels on something for hours before finally asking someone to help me.
+
+The community has proven one thing to me time and time again: there are people here who will not only help you, but will be palpably *excited* to help you and share what they know, especially if it's clear you've made efforts to use your resources and try things on your own first. I'm one of those people now!
diff --git a/website/docs/dbt-cli/cli-overview.md b/website/docs/dbt-cli/cli-overview.md
deleted file mode 100644
index 3e44bab801b..00000000000
--- a/website/docs/dbt-cli/cli-overview.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: "CLI overview"
-description: "Run your dbt project from the command line."
----
-
-dbt Core ships with a command-line interface (CLI) for running your dbt project. dbt Core and its CLI are free to use and available as an [open source project](https://github.com/dbt-labs/dbt-core).
-
-When using the command line, you can run commands and do other work from the current or _working directory_ on your computer. Before running the dbt project from the command line, make sure the working directory is your dbt project directory. For more details, see "[Creating a dbt project](/docs/build/projects)."
-
-
-
-Once you verify your dbt project is your working directory, you can execute dbt commands. A full list of dbt commands can be found in the [reference section](/reference/dbt-commands).
-
-
-
-:::tip Pro tip: Using the --help flag
-
-Most command-line tools, including dbt, have a `--help` flag that you can use to show available commands and arguments. For example, you can use the `--help` flag with dbt in two ways:
-• `dbt --help`: Lists the commands available for dbt
-• `dbt run --help`: Lists the flags available for the `run` command
-
-:::
-
diff --git a/website/docs/docs/about-setup.md b/website/docs/docs/about-setup.md
index ceb34a5ccbb..1021c1b65ac 100644
--- a/website/docs/docs/about-setup.md
+++ b/website/docs/docs/about-setup.md
@@ -21,14 +21,14 @@ To begin configuring dbt now, select the option that is right for you.
diff --git a/website/docs/docs/about/overview.md b/website/docs/docs/about/overview.md
deleted file mode 100644
index e34866fa3fe..00000000000
--- a/website/docs/docs/about/overview.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-title: "What is dbt? "
-id: "overview"
----
-
-dbt is a productivity tool that helps analysts get more done and produce higher quality results.
-
-Analysts commonly spend 50-80% of their time modeling raw data—cleaning, reshaping, and applying fundamental business logic to it. dbt empowers analysts to do this work better and faster.
-
-dbt's primary interface is its CLI. Using dbt is a combination of editing code in a text editor and running that code using dbt from the command line using `dbt [command] [options]`.
-
-# How does dbt work?
-
-dbt has two core workflows: building data models and testing data models. (We call any transformed version of raw data a data model.)
-
-To create a data model, an analyst simply writes a SQL `SELECT` statement. dbt then takes that statement and builds it in the database, materializing it as either a view or a table. This model can then be queried by other models or by other analytics tools.
-
-To test a data model, an analyst asserts something to be true about the underlying data. For example, an analyst can assert that a certain field should never be null, should always hold unique values, or should always map to a field in another table.
Analysts can also write assertions that express much more customized logic, such as “debits and credits should always be equal within a given journal entry”. dbt then tests all assertions against the database and returns success or failure responses.
-
-# Does dbt really help me get more done?
-
-One dbt user has this to say: *“At this point when I have a new question, I can answer it 10-100x faster than I could before.”* Here’s how:
-
-- dbt allows analysts to avoid writing boilerplate DML and DDL: managing transactions, dropping tables, and managing schema changes. All business logic is expressed in SQL `SELECT` statements, and dbt takes care of materialization.
-- dbt creates leverage. Instead of starting at the raw data with every analysis, analysts instead build up reusable data models that can be referenced in subsequent work.
-- dbt includes optimizations for data model materialization, allowing analysts to dramatically reduce the time their queries take to run.
-
-There are many other optimizations in dbt to help you work quickly: macros, hooks, and package management are all accelerators.
-
-# Does dbt really help me produce more reliable analysis?
-
-It does. Here’s how:
-
-- Writing SQL frequently involves a lot of copy-paste, which leads to errors when logic changes. With dbt, analysts don’t need to copy-paste. Instead, they build reusable data models that then get pulled into subsequent models and analysis. Change a model once and everything that relies on it reflects that change.
-- dbt allows subject matter experts to publish the canonical version of a particular data model, encapsulating all complex business logic. All analysis on top of this model will incorporate the same business logic without needing to understand it.
-- dbt plays nicely with source control. Using dbt, analysts can use mature source control processes like branching, pull requests, and code reviews.
-- dbt makes it easy and fast to write functional tests on the underlying data. Many analytic errors are caused by edge cases in the data: testing helps analysts find and handle those edge cases.
-
-# Why SQL?
-
-While there are a large number of great languages for manipulating data, we’ve chosen SQL as the primary [data transformation](https://www.getdbt.com/analytics-engineering/transformation/) language at the heart of dbt. There are three reasons for this:
-
-1. SQL is a very widely-known language for working with data. Using SQL gives the largest-possible group of users access.
-2. Modern analytic databases are extremely performant and have sophisticated optimizers. Writing data transformations in SQL allows users to describe transformations on their data but leave the execution plan to the underlying database technology. In practice, this provides excellent results with far less work on the part of the author.
-3. SQL `SELECT` statements enjoy a built-in structure for describing dependencies: `FROM X` and `JOIN Y`. This results in less setup and maintenance overhead in ensuring that transforms execute in the correct order, compared to other languages and tools.
-
-# What databases does dbt currently support?
-
-See [Supported Data Platforms](/docs/supported-data-platforms) to view the full list of supported databases, warehouses, and query engines.
-
-# How do I get started?
-
-dbt is open source and completely free to download and use. See our [Getting Started guide](/docs/introduction) for more.
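The deleted overview above describes dbt's data tests only in prose (a field should never be null, should always hold unique values, or should map to a field in another table). As a minimal sketch of how those same assertions are expressed as generic tests in a dbt `schema.yml` — the `orders` and `customers` model names here are hypothetical placeholders, not taken from the docs above:

```yaml
# models/schema.yml — hypothetical models used for illustration only
version: 2

models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - unique      # the column must always hold unique values
          - not_null    # the column should never be null
      - name: customer_id
        tests:
          - relationships:          # every value must map to a field in another table
              to: ref('customers')
              field: customer_id
```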
diff --git a/website/docs/docs/build/about-metricflow.md b/website/docs/docs/build/about-metricflow.md
index d76715c46a1..ea2efcabf06 100644
--- a/website/docs/docs/build/about-metricflow.md
+++ b/website/docs/docs/build/about-metricflow.md
@@ -82,7 +82,7 @@ The following example data is based on the Jaffle Shop repo. You can view the co
To make this more concrete, consider the metric `order_total`, which is defined using the SQL expression:

`select sum(order_total) as order_total from orders`
-This expression calculates the revenue from each order by summing the order_total column in the orders table. In a business setting, the metric order_total is often calculated according to different categories, such as"
+This expression calculates the total revenue for all orders by summing the order_total column in the orders table. In a business setting, the metric order_total is often calculated according to different categories, such as:
- Time, for example `date_trunc(ordered_at, 'day')`
- Order Type, using `is_food_order` dimension from the `orders` table.
diff --git a/website/docs/docs/build/build-metrics-intro.md b/website/docs/docs/build/build-metrics-intro.md
index cdac51224ed..24af2a0864a 100644
--- a/website/docs/docs/build/build-metrics-intro.md
+++ b/website/docs/docs/build/build-metrics-intro.md
@@ -14,7 +14,7 @@ Use MetricFlow in dbt to centrally define your metrics. As a key component of th
MetricFlow allows you to:

- Intuitively define metrics in your dbt project
-- Develop from your preferred environment, whether that's the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation), [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud), or [dbt Core](/docs/core/installation)
+- Develop from your preferred environment, whether that's the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation), [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud), or [dbt Core](/docs/core/installation-overview)
- Use [MetricFlow commands](/docs/build/metricflow-commands) to query and test those metrics in your development environment
- Harness the true magic of the universal dbt Semantic Layer and dynamically query these metrics in downstream tools (Available for dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) accounts only).
diff --git a/website/docs/docs/build/cumulative-metrics.md b/website/docs/docs/build/cumulative-metrics.md
index 708045c1f3e..45a136df751 100644
--- a/website/docs/docs/build/cumulative-metrics.md
+++ b/website/docs/docs/build/cumulative-metrics.md
@@ -38,10 +38,7 @@ metrics:
## Limitations

Cumulative metrics are currently under active development and have the following limitations:
-
-1. You can only use the [`metric_time` dimension](/docs/build/dimensions#time) to check cumulative metrics. If you don't use `metric_time` in the query, the cumulative metric will return incorrect results because it won't perform the time spine join. This means you cannot reference time dimensions other than the `metric_time` in the query.
-2. If you use `metric_time` in your query filter but don't include "start_time" and "end_time," cumulative metrics will left-censor the input data. For example, if you query a cumulative metric with a 7-day window with the filter `{{ TimeDimension('metric_time') }} BETWEEN '2023-08-15' AND '2023-08-30' `, the values for `2023-08-15` to `2023-08-20` return missing or incomplete data. This is because we apply the `metric_time` filter to the aggregation input. To avoid this, you must use `start_time` and `end_time` in the query filter.
-
+- You are required to use the [`metric_time` dimension](/docs/build/dimensions#time) when querying cumulative metrics. If you don't use `metric_time` in the query, the cumulative metric will return incorrect results because it won't perform the time spine join. This means you cannot reference time dimensions other than the `metric_time` in the query.

## Cumulative metrics example
diff --git a/website/docs/docs/build/dimensions.md b/website/docs/docs/build/dimensions.md
index b8679fe11b0..683ff730d3c 100644
--- a/website/docs/docs/build/dimensions.md
+++ b/website/docs/docs/build/dimensions.md
@@ -15,7 +15,8 @@ In a data platform, dimensions is part of a larger structure called a semantic m
Groups are defined within semantic models, alongside entities and measures, and correspond to non-aggregatable columns in your dbt model that provides categorical or time-based context. In SQL, dimensions is typically included in the GROUP BY clause.-->

-All dimensions require a `name`, `type` and in some cases, an `expr` parameter.
+All dimensions require a `name`, `type`, and in some cases, an `expr` parameter. The `name` for your dimension must be unique to the semantic model and cannot be the same as an existing `entity` or `measure` within that same model.
+

| Parameter | Description | Type |
| --------- | ----------- | ---- |
diff --git a/website/docs/docs/build/entities.md b/website/docs/docs/build/entities.md
index 464fa2c3b8c..e44f9e79af6 100644
--- a/website/docs/docs/build/entities.md
+++ b/website/docs/docs/build/entities.md
@@ -8,7 +8,7 @@ tags: [Metrics, Semantic Layer]
Entities are real-world concepts in a business such as customers, transactions, and ad campaigns. We often focus our analyses around specific entities, such as customer churn or annual recurring revenue modeling. We represent entities in our semantic models using id columns that serve as join keys to other semantic models in your semantic graph.

-Within a semantic graph, the required parameters for an entity are `name` and `type`. The `name` refers to either the key column name from the underlying data table, or it may serve as an alias with the column name referenced in the `expr` parameter.
+Within a semantic graph, the required parameters for an entity are `name` and `type`. The `name` refers to either the key column name from the underlying data table, or it may serve as an alias with the column name referenced in the `expr` parameter. The `name` for your entity must be unique to the semantic model and cannot be the same as an existing `measure` or `dimension` within that same model.

Entities can be specified with a single column or multiple columns. Entities (join keys) in a semantic model are identified by their name. Each entity name must be unique within a semantic model, but it doesn't have to be unique across different semantic models.
diff --git a/website/docs/docs/build/environment-variables.md b/website/docs/docs/build/environment-variables.md
index 55d3fd19c6c..3f2aebd0036 100644
--- a/website/docs/docs/build/environment-variables.md
+++ b/website/docs/docs/build/environment-variables.md
@@ -103,6 +103,8 @@ dbt Cloud has a number of pre-defined variables built in. The following environm
- `DBT_CLOUD_RUN_ID`: The ID of this particular run
- `DBT_CLOUD_RUN_REASON_CATEGORY`: The "category" of the trigger for this run (one of: `scheduled`, `github_pull_request`, `gitlab_merge_request`, `azure_pull_request`, `other`)
- `DBT_CLOUD_RUN_REASON`: The specific trigger for this run (eg.
`Scheduled`, `Kicked off by `, or custom via `API`) +- `DBT_CLOUD_ENVIRONMENT_ID`: The ID of the environment for this run +- `DBT_CLOUD_ACCOUNT_ID`: The ID of the dbt Cloud account for this run **Git details** diff --git a/website/docs/docs/build/materializations.md b/website/docs/docs/build/materializations.md index 8846f4bb0c5..192284a31ca 100644 --- a/website/docs/docs/build/materializations.md +++ b/website/docs/docs/build/materializations.md @@ -14,6 +14,8 @@ pagination_next: "docs/build/incremental-models" - ephemeral - materialized view +You can also configure [custom materializations](/guides/create-new-materializations?step=1) in dbt. Custom materializations are a powerful way to extend dbt's functionality to meet your specific needs. + ## Configuring materializations By default, dbt models are materialized as "views". Models can be configured with a different materialization by supplying the `materialized` configuration parameter as shown below. diff --git a/website/docs/docs/build/measures.md b/website/docs/docs/build/measures.md index e06b5046976..feea2b30ca4 100644 --- a/website/docs/docs/build/measures.md +++ b/website/docs/docs/build/measures.md @@ -6,19 +6,13 @@ sidebar_label: "Measures" tags: [Metrics, Semantic Layer] --- -Measures are aggregations performed on columns in your model. They can be used as final metrics or serve as building blocks for more complex metrics. Measures have several inputs, which are described in the following table along with their field types. +Measures are aggregations performed on columns in your model. They can be used as final metrics or serve as building blocks for more complex metrics. -| Parameter | Description | Type | -| --------- | ----------- | ---- | -| [`name`](#name) | Provide a name for the measure, which must be unique and can't be repeated across all semantic models in your dbt project. | Required | -| [`description`](#description) | Describes the calculated measure. | Optional | -| [`agg`](#aggregation) | dbt supports aggregations such as `sum`, `min`, `max`, and more. Refer to [Aggregation](/docs/build/measures#aggregation) for the full list of supported aggregation types. | Required | -| [`expr`](#expr) | You can either reference an existing column in the table or use a SQL expression to create or derive a new one. | Optional | -| [`non_additive_dimension`](#non-additive-dimensions) | Non-additive dimensions can be specified for measures that cannot be aggregated over certain dimensions, such as bank account balances, to avoid producing incorrect results. | Optional | -| `agg_params` | specific aggregation properties such as a percentile. | Optional | -| `agg_time_dimension` | The time field. Defaults to the default agg time dimension for the semantic model. | Optional | -| `label` | How the metric appears in project docs and downstream integrations. | Required | +Measures have several inputs, which are described in the following table along with their field types. +import MeasuresParameters from '/snippets/_sl-measures-parameters.md'; + + ## Measure spec @@ -40,7 +34,8 @@ measures: When you create a measure, you can either give it a custom name or use the `name` of the data platform column directly. If the `name` of the measure is different from the column name, you need to add an `expr` to specify the column name. The `name` of the measure is used when creating a metric. -Measure names must be **unique** across all semantic models in a project. 
+Measure names must be unique across all semantic models in a project and cannot be the same as an existing `entity` or `dimension` within that same model.
+

### Description
diff --git a/website/docs/docs/build/metricflow-commands.md b/website/docs/docs/build/metricflow-commands.md
index 4d2477ad2ed..7e535e4ea62 100644
--- a/website/docs/docs/build/metricflow-commands.md
+++ b/website/docs/docs/build/metricflow-commands.md
@@ -8,7 +8,7 @@ tags: [Metrics, Semantic Layer]
Once you define metrics in your dbt project, you can query metrics, dimensions, and dimension values, and validate your configs using the MetricFlow commands.

-MetricFlow allows you to define and query metrics in your dbt project in the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation), [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud), or [dbt Core](/docs/core/installation). To experience the power of the universal [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) and dynamically query those metrics in downstream tools, you'll need a dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) account.
+MetricFlow allows you to define and query metrics in your dbt project in the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation), [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud), or [dbt Core](/docs/core/installation-overview). To experience the power of the universal [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) and dynamically query those metrics in downstream tools, you'll need a dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) account.

MetricFlow is compatible with Python versions 3.8, 3.9, 3.10, and 3.11.

@@ -17,7 +17,7 @@ MetricFlow is compatible with Python versions 3.8, 3.9, 3.10, and 3.11.

MetricFlow is a dbt package that allows you to define and query metrics in your dbt project. You can use MetricFlow to query metrics in your dbt project in the dbt Cloud CLI, dbt Cloud IDE, or dbt Core.

-**Note** — MetricFlow commands aren't supported in dbt Cloud jobs yet. However, you can add MetricFlow validations with your git provider (such as GitHub Actions) by installing MetricFlow (`pip install metricflow`). This allows you to run MetricFlow commands as part of your continuous integration checks on PRs.
+**Note** — MetricFlow commands aren't supported in dbt Cloud jobs yet. However, you can add MetricFlow validations with your git provider (such as GitHub Actions) by installing MetricFlow (`python -m pip install metricflow`). This allows you to run MetricFlow commands as part of your continuous integration checks on PRs.

@@ -54,7 +54,7 @@ You can install [MetricFlow](https://github.com/dbt-labs/metricflow#getting-star
1. Create or activate your virtual environment `python -m venv venv`
2. Run `pip install dbt-metricflow`
-  * You can install MetricFlow using PyPI as an extension of your dbt adapter in the command line. To install the adapter, run `pip install "dbt-metricflow[your_adapter_name]"` and add the adapter name at the end of the command. For example, for a Snowflake adapter run `pip install "dbt-metricflow[snowflake]"`
+  * You can install MetricFlow using PyPI as an extension of your dbt adapter in the command line. To install the adapter, run `python -m pip install "dbt-metricflow[your_adapter_name]"` and add the adapter name at the end of the command. For example, for a Snowflake adapter run `python -m pip install "dbt-metricflow[snowflake]"`

**Note**, you'll need to manage versioning between dbt Core, your adapter, and MetricFlow.
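The naming rules added across the hunks above all interact: within one semantic model, an entity, dimension, and measure may not share a name, and a measure name must additionally be unique across every semantic model in the project. A small sketch of a semantic model that satisfies these rules — the `orders` model and its columns are hypothetical:

```yaml
semantic_models:
  - name: orders
    model: ref('orders')
    defaults:
      agg_time_dimension: ordered_at
    entities:
      - name: order_id       # unique among this model's entities, dimensions, and measures
        type: primary
    dimensions:
      - name: ordered_at     # may not reuse "order_id" or "order_total"
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: order_total    # must also be unique across all semantic models in the project
        agg: sum
```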
diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index 8d18a55e949..b60b4ba5b5e 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -25,15 +25,9 @@ dbt _packages_ are in fact standalone dbt projects, with models and macros that * It's important to note that defining and installing dbt packages is different from [defining and installing Python packages](/docs/build/python-models#using-pypi-packages) -:::info `dependencies.yml` has replaced `packages.yml` -Starting from dbt v1.6, `dependencies.yml` has replaced `packages.yml`. This file can now contain both types of dependencies: "package" and "project" dependencies. -- "Package" dependencies lets you add source code from someone else's dbt project into your own, like a library. -- "Project" dependencies provide a different way to build on top of someone else's work in dbt. Refer to [Project dependencies](/docs/collaborate/govern/project-dependencies) for more info. -- -You can rename `packages.yml` to `dependencies.yml`, _unless_ you need to use Jinja within your packages specification. This could be necessary, for example, if you want to add an environment variable with a git token in a private git package specification. - -::: +import UseCaseInfo from '/snippets/_packages_or_dependencies.md'; + ## How do I add a package to my project? 1. Add a file named `dependencies.yml` or `packages.yml` to your dbt project. This should be at the same level as your `dbt_project.yml` file. diff --git a/website/docs/docs/build/saved-queries.md b/website/docs/docs/build/saved-queries.md index 39a4b2e52fd..7b88a052726 100644 --- a/website/docs/docs/build/saved-queries.md +++ b/website/docs/docs/build/saved-queries.md @@ -6,10 +6,6 @@ sidebar_label: "Saved queries" tags: [Metrics, Semantic Layer] --- -:::info Saved queries coming soon -Saved queries isn't currently available in MetricFlow but support is coming soon. -::: - Saved queries are a way to save commonly used queries in MetricFlow. You can group metrics, dimensions, and filters that are logically related into a saved query. To define a saved query, refer to the following specification: @@ -18,24 +14,23 @@ To define a saved query, refer to the following specification: | --------- | ----------- | ---- | | `name` | The name of the metric. | Required | | `description` | The description of the metric. | Optional | -| `metrics` | The metrics included in the saved query. | Required | -| `group_bys` | The value displayed in downstream tools. | Required | -| `where` | Filter applied to the query. | Optional | +| `query_params` | The query parameters for the saved query: `metrics`, `group_by`, and `where`. | Required | The following is an example of a saved query: ```yaml -saved_query: +saved_queries: name: p0_booking description: Booking-related metrics that are of the highest priority. 
-  metrics:
-    - bookings
-    - instant_bookings
-  group_bys:
-    - TimeDimension('metric_time', 'day')
-    - Dimension('listing__capacity_latest')
-  where:
-    - "{{ Dimension('listing__capacity_latest') }} > 3"
+  query_params:
+    metrics:
+      - bookings
+      - instant_bookings
+    group_by:
+      - TimeDimension('metric_time', 'day')
+      - Dimension('listing__capacity_latest')
+    where:
+      - "{{ Dimension('listing__capacity_latest') }} > 3"
```

### FAQs
diff --git a/website/docs/docs/build/semantic-models.md b/website/docs/docs/build/semantic-models.md
index 118e93a26b1..09f808d7a17 100644
--- a/website/docs/docs/build/semantic-models.md
+++ b/website/docs/docs/build/semantic-models.md
@@ -40,17 +40,17 @@ The complete spec for semantic models is below:

```yaml
semantic_models:
-  - name: the_name_of_the_semantic_model ## Required
-    description: same as always ## Optional
-    model: ref('some_model') ## Required
-    defaults: ## Required
-      agg_time_dimension: dimension_name ## Required if the model contains dimensions
-    entities: ## Required
-      - see more information in entities
-    measures: ## Optional
-      - see more information in measures section
-    dimensions: ## Required
-      - see more information in dimensions section
+  - name: the_name_of_the_semantic_model ## Required
+    description: same as always ## Optional
+    model: ref('some_model') ## Required
+    defaults: ## Required
+      agg_time_dimension: dimension_name ## Required if the model contains dimensions
+    entities: ## Required
+      - see more information in entities
+    measures: ## Optional
+      - see more information in measures section
+    dimensions: ## Required
+      - see more information in dimensions section
    primary_entity: >-
      if the semantic model has no primary entity, then this property is required. #Optional if a primary entity exists, otherwise Required
```

@@ -123,14 +123,18 @@ semantic_models:
    config:
      enabled: true | false
      group: some_group
+      meta:
+        some_key: some_value
```

Semantic model config in `dbt_project.yml`:

```yml
-semantic_models:
+semantic-models:
  my_project_name:
    +enabled: true | false
    +group: some_group
+    +meta:
+      some_key: some_value
```

@@ -226,16 +230,14 @@ For semantic models with a measure, you must have a [primary time group](/docs/b

### Measures

-[Measures](/docs/build/measures) are aggregations applied to columns in your data model. They can be used as the foundational building blocks for more complex metrics, or be the final metric itself. Measures have various parameters which are listed in a table along with their descriptions and types.
+[Measures](/docs/build/measures) are aggregations applied to columns in your data model. They can be used as the foundational building blocks for more complex metrics, or be the final metric itself.
+
+Measures have various parameters which are listed in a table along with their descriptions and types.
+
+import MeasuresParameters from '/snippets/_sl-measures-parameters.md';
+
+

-| Parameter | Description | Field type |
-| --- | --- | --- |
-| `name`| Provide a name for the measure, which must be unique and can't be repeated across all semantic models in your dbt project. | Required |
-| `description` | Describes the calculated measure. | Optional |
-| `agg` | dbt supports the following aggregations: `sum`, `max`, `min`, `count_distinct`, and `sum_boolean`. | Required |
-| `expr` | You can either reference an existing column in the table or use a SQL expression to create or derive a new one.
| Optional |
-| `non_additive_dimension` | Non-additive dimensions can be specified for measures that cannot be aggregated over certain dimensions, such as bank account balances, to avoid producing incorrect results. | Optional |
-| `create_metric` | You can create a metric directly from a measure with `create_metric: True` and specify its display name with create_metric_display_name. Default is false. | Optional |

import SetUpPages from '/snippets/_metrics-dependencies.md';

diff --git a/website/docs/docs/cloud/about-cloud-develop-defer.md b/website/docs/docs/cloud/about-cloud-develop-defer.md
index 1861a6d8a79..37bfaacfd0c 100644
--- a/website/docs/docs/cloud/about-cloud-develop-defer.md
+++ b/website/docs/docs/cloud/about-cloud-develop-defer.md
@@ -7,18 +7,15 @@ pagination_next: "docs/cloud/cloud-cli-installation"
---

-[Defer](/reference/node-selection/defer) is a powerful feature that allows developers to only build and run and test models they've edited without having to first run and build all the models that come before them (upstream parents). This is powered by using a production manifest for comparison, and dbt will resolve the `{{ ref() }}` function with upstream production artifacts.
+[Defer](/reference/node-selection/defer) is a powerful feature that allows developers to build, run, and test only the models they've edited, without having to first run and build all the models that come before them (upstream parents). dbt powers this by using a production manifest for comparison, and resolves the `{{ ref() }}` function with upstream production artifacts.

-By default, dbt follows these rules:
-
-- Defers to the production environment when there's no development schema.
-- If a development schema exists, dbt will prioritize those changes, which minimizes development time and avoids unnecessary model builds.
+Both the dbt Cloud IDE and the dbt Cloud CLI enable users to natively defer to production metadata directly in their development workflows.

-Both the dbt Cloud IDE and the dbt Cloud CLI allow users to natively defer to production metadata directly in their development workflows.
+By default, dbt follows these rules:

-For specific scenarios:
-- Use [`--favor-state`](/reference/node-selection/defer#favor-state) to always use production artifacts to resolve the ref.
-- If facing issues with outdated tables in the development schema, `--favor-state` is an alternative to defer.
+- dbt uses the production locations of parent models to resolve `{{ ref() }}` functions, based on metadata from the production environment.
+- If a development version of a deferred model exists, dbt preferentially uses the development database location when resolving the reference.
+- Passing the [`--favor-state`](/reference/node-selection/defer#favor-state) flag overrides the default behavior and _always_ resolves refs using production metadata, regardless of the presence of a development relation.

For a clean slate, it's a good practice to drop the development schema at the start and end of your development cycle.
diff --git a/website/docs/docs/cloud/about-cloud-develop.md b/website/docs/docs/cloud/about-cloud-develop.md
deleted file mode 100644
index 90abbb98bf4..00000000000
--- a/website/docs/docs/cloud/about-cloud-develop.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: About developing in dbt Cloud
-id: about-cloud-develop
-description: "Learn how to develop your dbt projects using dbt Cloud."
-sidebar_label: "About developing in dbt Cloud" -pagination_next: "docs/cloud/cloud-cli-installation" -hide_table_of_contents: true ---- - -dbt Cloud offers a fast and reliable way to work on your dbt project. It runs dbt Core in a hosted (single or multi-tenant) environment. You can develop in your browser using an integrated development environment (IDE) or in a dbt Cloud-powered command line interface (CLI): - -
- - - - - -

- -The following sections provide detailed instructions on setting up the dbt Cloud CLI and dbt Cloud IDE. To get started with dbt development, you'll need a [developer](/docs/cloud/manage-access/seats-and-users) account. For a more comprehensive guide about developing in dbt, refer to our [quickstart guides](/guides). - - ---------- -**Note**: The dbt Cloud CLI and the open-sourced dbt Core are both command line tools that let you run dbt commands. The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its [features](/docs/cloud/about-cloud/dbt-cloud-features). - diff --git a/website/docs/docs/cloud/about-cloud-setup.md b/website/docs/docs/cloud/about-cloud-setup.md index 5c8e5525bf1..7daf33a4684 100644 --- a/website/docs/docs/cloud/about-cloud-setup.md +++ b/website/docs/docs/cloud/about-cloud-setup.md @@ -13,14 +13,13 @@ dbt Cloud is the fastest and most reliable way to deploy your dbt jobs. It conta - Configuring access to [GitHub](/docs/cloud/git/connect-github), [GitLab](/docs/cloud/git/connect-gitlab), or your own [git repo URL](/docs/cloud/git/import-a-project-by-git-url). - [Managing users and licenses](/docs/cloud/manage-access/seats-and-users) - [Configuring secure access](/docs/cloud/manage-access/about-user-access) -- Configuring the [dbt Cloud IDE](/docs/cloud/about-cloud-develop) -- Installing and configuring the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) These settings are intended for dbt Cloud administrators. If you need a more detailed first-time setup guide for specific data platforms, read our [quickstart guides](/guides). If you want a more in-depth learning experience, we recommend taking the dbt Fundamentals on our [dbt Learn online courses site](https://courses.getdbt.com/). ## Prerequisites + - To set up dbt Cloud, you'll need to have a dbt Cloud account with administrator access. If you still need to create a dbt Cloud account, [sign up today](https://getdbt.com) on our North American servers or [contact us](https://getdbt.com/contact) for international options. - To have the best experience using dbt Cloud, we recommend you use modern and up-to-date web browsers like Chrome, Safari, Edge, and Firefox. diff --git a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md index 64743f85afa..cc1c2531f56 100644 --- a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md +++ b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md @@ -12,7 +12,7 @@ dbt Cloud is [hosted](/docs/cloud/about-cloud/architecture) in multiple regions | Region | Location | Access URL | IP addresses | Developer plan | Team plan | Enterprise plan | |--------|----------|------------|--------------|----------------|-----------|-----------------| | North America multi-tenant [^1] | AWS us-east-1 (N. Virginia) | cloud.getdbt.com | 52.45.144.63
54.81.134.249
52.22.161.231 | ✅ | ✅ | ✅ | -| North America Cell 1 [^1] | AWS us-east-1 (N.Virginia) | {account prefix}.us1.dbt.com | [Located in Account Settings](#locating-your-dbt-cloud-ip-addresses) | ❌ | ❌ | ❌ | +| North America Cell 1 [^1] | AWS us-east-1 (N.Virginia) | {account prefix}.us1.dbt.com | [Located in Account Settings](#locating-your-dbt-cloud-ip-addresses) | ❌ | ❌ | ✅ | | EMEA [^1] | AWS eu-central-1 (Frankfurt) | emea.dbt.com | 3.123.45.39
3.126.140.248
3.72.153.148 | ❌ | ❌ | ✅ | | APAC [^1] | AWS ap-southeast-2 (Sydney)| au.dbt.com | 52.65.89.235
3.106.40.33
13.239.155.206
| ❌ | ❌ | ✅ | | Virtual Private dbt or Single tenant | Customized | Customized | Ask [Support](/community/resources/getting-help#dbt-cloud-support) for your IPs | ❌ | ❌ | ✅ | diff --git a/website/docs/docs/cloud/about-develop-dbt.md b/website/docs/docs/cloud/about-develop-dbt.md new file mode 100644 index 00000000000..a71c32d5352 --- /dev/null +++ b/website/docs/docs/cloud/about-develop-dbt.md @@ -0,0 +1,30 @@ +--- +title: About developing in dbt +id: about-develop-dbt +description: "Learn how to develop your dbt projects using dbt Cloud." +sidebar_label: "About developing in dbt" +pagination_next: "docs/cloud/about-cloud-develop-defer" +hide_table_of_contents: true +--- + +Develop dbt projects using dbt Cloud, which offers a fast and reliable way to work on your dbt project. It runs dbt Core in a hosted (single or multi-tenant) environment. + +You can develop in your browser using an integrated development environment (IDE) or in a dbt Cloud-powered command line interface (CLI). + +
+ + + + + +

+ +To get started with dbt development, you'll need a [dbt Cloud](https://www.getdbt.com/signup) account and developer seat. For a more comprehensive guide about developing in dbt, refer to our [quickstart guides](/guides). diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md index 70ae74c3df7..f3294477611 100644 --- a/website/docs/docs/cloud/cloud-cli-installation.md +++ b/website/docs/docs/cloud/cloud-cli-installation.md @@ -73,7 +73,9 @@ Before you begin, make sure you have [Homebrew installed](http://brew.sh/) in yo * Note that you no longer need to run the `dbt deps` command when your environment starts. This step was previously required during initialization. However, you should still run `dbt deps` if you make any changes to your `packages.yml` file. -4. After you've verified the installation, [configure](/docs/cloud/configure-cloud-cli) the dbt Cloud CLI for your dbt Cloud project and use it to run [dbt commands](/reference/dbt-commands) similar to dbt Core. For example, execute `dbt compile` to compile a project using dbt Cloud and validate your models and tests. +4. Clone your repository to your local computer using `git clone`. For example, to clone a GitHub repo using HTTPS format, run `git clone https://github.com/YOUR-USERNAME/YOUR-REPOSITORY`. + +5. After cloning your repo, [configure](/docs/cloud/configure-cloud-cli) the dbt Cloud CLI for your dbt Cloud project. This lets you run dbt commands like `dbt compile` to compile your project and validate models and tests. You can also add, edit, and synchronize files with your repo. @@ -102,7 +104,9 @@ Note that if you are using VS Code, you must restart it to pick up modified envi * Note that you no longer need to run the `dbt deps` command when your environment starts. This step was previously required during initialization. However, you should still run `dbt deps` if you make any changes to your `packages.yml` file. -4. After installation, [configure](/docs/cloud/configure-cloud-cli) the dbt Cloud CLI for your dbt Cloud project and use it to run [dbt commands](/reference/dbt-commands) similar to dbt Core. For example, execute `dbt compile`, to compile a project using dbt Cloud and confirm that it works. +4. Clone your repository to your local computer using `git clone`. For example, to clone a GitHub repo using HTTPS format, run `git clone https://github.com/YOUR-USERNAME/YOUR-REPOSITORY`. + +5. After cloning your repo, [configure](/docs/cloud/configure-cloud-cli) the dbt Cloud CLI for your dbt Cloud project. This lets you run dbt commands like `dbt compile` to compile your project and validate models and tests. You can also add, edit, and synchronize files with your repo. @@ -134,7 +138,9 @@ Advanced users can configure multiple projects to use the same Cloud CLI executa * Note that you no longer need to run the `dbt deps` command when your environment starts. This step was previously required during initialization. However, you should still run `dbt deps` if you make any changes to your `packages.yml` file. -4. After installation, [configure](/docs/cloud/configure-cloud-cli) the dbt Cloud CLI for your dbt Cloud project and use it to run [dbt commands](/reference/dbt-commands) similar to dbt Core. For example, execute `dbt compile`, to compile a project using dbt Cloud and confirm that it works. +4. Clone your repository to your local computer using `git clone`. 
For example, to clone a GitHub repo using HTTPS format, run `git clone https://github.com/YOUR-USERNAME/YOUR-REPOSITORY`.
+
+5. After cloning your repo, [configure](/docs/cloud/configure-cloud-cli) the dbt Cloud CLI for your dbt Cloud project. This lets you run dbt commands like `dbt compile` to compile your project and validate models and tests. You can also add, edit, and synchronize files with your repo.

@@ -149,9 +155,9 @@ If you already have dbt Core installed, the dbt Cloud CLI may conflict. Here are
- Uninstall the dbt Cloud CLI using the command: `pip uninstall dbt`
- Reinstall dbt Core using the following command, replacing "adapter_name" with the appropriate adapter name:
   ```shell
-   pip install dbt-adapter_name --force-reinstall
+   python -m pip install dbt-adapter_name --force-reinstall
   ```
-   For example, if I used Snowflake as an adapter, I would run: `pip install dbt-snowflake --force-reinstall`
+   For example, if I used Snowflake as an adapter, I would run: `python -m pip install dbt-snowflake --force-reinstall`

--------

@@ -187,17 +193,19 @@ We recommend using virtual environments (venv) to namespace `cloud-cli`.

2. Make sure you're in your virtual environment and run the following command to install the dbt Cloud CLI:

   ```bash
-   pip3 install dbt
+   pip install dbt --no-cache-dir
   ```

-1. (Optional) To revert back to dbt Core, first uninstall both the dbt Cloud CLI and dbt Core. Then reinstall dbt Core.
+3. (Optional) To revert to dbt Core, first uninstall both the dbt Cloud CLI and dbt Core. Then reinstall dbt Core.

   ```bash
-   pip3 uninstall dbt-core dbt
+   pip uninstall dbt-core dbt
    pip install dbt-adapter_name --force-reinstall
   ```

-4. After you've verified the installation, [configure](/docs/cloud/configure-cloud-cli) the dbt Cloud CLI for your dbt Cloud project. You can then use it to run [dbt commands](/reference/dbt-commands) similar to dbt Core. For example, execute `dbt compile` to compile a project using dbt Cloud and validate your models and tests.
+4. Clone your repository to your local computer using `git clone`. For example, to clone a GitHub repo using HTTPS format, run `git clone https://github.com/YOUR-USERNAME/YOUR-REPOSITORY`.
+
+5. After cloning your repo, [configure](/docs/cloud/configure-cloud-cli) the dbt Cloud CLI for your dbt Cloud project. This lets you run dbt commands like `dbt compile` to compile your project and validate models and tests. You can also add, edit, and synchronize files with your repo.

@@ -235,7 +243,7 @@ To update, follow the same process explained in [Windows](/docs/cloud/cloud-cli-

To update:
- Make sure you're in your virtual environment
-- Run `pip install --upgrade dbt`.
+- Run `python -m pip install --upgrade dbt`.

diff --git a/website/docs/docs/cloud/configure-cloud-cli.md b/website/docs/docs/cloud/configure-cloud-cli.md
index 35f82cff8cf..d6fca00cf25 100644
--- a/website/docs/docs/cloud/configure-cloud-cli.md
+++ b/website/docs/docs/cloud/configure-cloud-cli.md
@@ -76,6 +76,8 @@ Once you install the dbt Cloud CLI, you need to configure it to connect to a dbt

   - To find your project ID, select **Develop** in the dbt Cloud navigation menu. You can use the URL to find the project ID. For example, in `https://cloud.getdbt.com/develop/26228/projects/123456`, the project ID is `123456`.

+6. You can now [use the dbt Cloud CLI](#use-the-dbt-cloud-cli) and run [dbt commands](/reference/dbt-commands) like `dbt compile`. With your repo cloned, you can add, edit, and sync files with your repo.
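For context on the step above, the link between a local checkout and a dbt Cloud project is declared in the project's `dbt_project.yml`. A minimal sketch, assuming the configuration uses the `dbt-cloud.project-id` key described in the dbt Cloud CLI docs — the project name and ID below are hypothetical, reusing the example ID from the URL above:

```yaml
# dbt_project.yml — hypothetical project; the ID comes from the example URL above
name: "my_dbt_project"
version: "1.0.0"

dbt-cloud:
  project-id: 123456   # found in the dbt Cloud URL, as described above
```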
+

### Set environment variables

To set environment variables in the dbt Cloud CLI for your dbt project:
diff --git a/website/docs/docs/cloud/connect-data-platform/about-connections.md b/website/docs/docs/cloud/connect-data-platform/about-connections.md
index 1329d179900..93bbf83584f 100644
--- a/website/docs/docs/cloud/connect-data-platform/about-connections.md
+++ b/website/docs/docs/cloud/connect-data-platform/about-connections.md
@@ -3,7 +3,7 @@ title: "About data platform connections"
id: about-connections
description: "Information about data platform connections"
sidebar_label: "About data platform connections"
-pagination_next: "docs/cloud/connect-data-platform/connect-starburst-trino"
+pagination_next: "docs/cloud/connect-data-platform/connect-microsoft-fabric"
pagination_prev: null
---
dbt Cloud can connect with a variety of data platform providers including:
@@ -11,6 +11,7 @@ dbt Cloud can connect with a variety of data platform providers including:
- [Apache Spark](/docs/cloud/connect-data-platform/connect-apache-spark)
- [Databricks](/docs/cloud/connect-data-platform/connect-databricks)
- [Google BigQuery](/docs/cloud/connect-data-platform/connect-bigquery)
+- [Microsoft Fabric](/docs/cloud/connect-data-platform/connect-microsoft-fabric)
- [PostgreSQL](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb)
- [Snowflake](/docs/cloud/connect-data-platform/connect-snowflake)
- [Starburst or Trino](/docs/cloud/connect-data-platform/connect-starburst-trino)
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-microsoft-fabric.md b/website/docs/docs/cloud/connect-data-platform/connect-microsoft-fabric.md
new file mode 100644
index 00000000000..e9d67524e89
--- /dev/null
+++ b/website/docs/docs/cloud/connect-data-platform/connect-microsoft-fabric.md
@@ -0,0 +1,43 @@
+---
+title: "Connect Microsoft Fabric"
+description: "Configure Microsoft Fabric connection."
+sidebar_label: "Connect Microsoft Fabric"
+---
+
+## Supported authentication methods
+The supported authentication methods are:
+- Azure Active Directory (Azure AD) service principal
+- Azure AD password
+
+SQL password (LDAP) is not supported in Microsoft Fabric Synapse Data Warehouse, so you must use Azure AD. This means that to use [Microsoft Fabric](https://www.microsoft.com/en-us/microsoft-fabric) in dbt Cloud, you will need at least one Azure AD service principal to connect dbt Cloud to Fabric, ideally one service principal for each user.
+
+### Active Directory service principal
+The following are the required fields for setting up a connection with Microsoft Fabric using Azure AD service principal authentication.
+
+| Field | Description |
+| --- | --- |
+| **Server** | The service principal's **host** value for the Fabric test endpoint. |
+| **Port** | The port to connect to Microsoft Fabric. You can use `1433` (the default), which is the standard SQL server port number. |
+| **Database** | The service principal's **database** value for the Fabric test endpoint. |
+| **Authentication** | Choose **Service Principal** from the dropdown. |
+| **Tenant ID** | The service principal's **Directory (tenant) ID**. |
+| **Client ID** | The service principal's **application (client) ID**. |
+| **Client secret** | The service principal's **client secret** (not the **client secret id**). |
+
+
+### Active Directory password
+
+The following are the required fields for setting up a connection with Microsoft Fabric using Azure AD password authentication.
+
+| Field | Description |
+| --- | --- |
+| **Server** | The server hostname to connect to Microsoft Fabric. |
+| **Port** | The server port. You can use `1433` (the default), which is the standard SQL server port number. |
+| **Database** | The database name. |
+| **Authentication** | Choose **Active Directory Password** from the dropdown. |
+| **User** | The AD username. |
+| **Password** | The AD username's password. |
+
+## Configuration
+
+To learn how to optimize performance with data platform-specific configurations in dbt Cloud, refer to [Microsoft Fabric DWH configurations](/reference/resource-configs/fabric-configs).
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md b/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md
index 486aa787936..06b9dd62f1a 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md
@@ -13,10 +13,12 @@ The following fields are required when creating a Postgres, Redshift, or AlloyDB
| Port | Usually 5432 (Postgres) or 5439 (Redshift) | `5439` |
| Database | The logical database to connect to and run queries against. | `analytics` |

-**Note**: When you set up a Redshift or Postgres connection in dbt Cloud, SSL-related parameters aren't available as inputs.
+**Note**: When you set up a Redshift or Postgres connection in dbt Cloud, SSL-related parameters aren't available as inputs.

+For dbt Cloud users, log in using the default database username and password. This is because [`IAM` authentication](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html) is not compatible with dbt Cloud.
+
### Connecting via an SSH Tunnel

To connect to a Postgres, Redshift, or AlloyDB instance via an SSH tunnel, select the **Use SSH Tunnel** option when creating your connection. When configuring the tunnel, you must supply the hostname, username, and port for the [bastion server](#about-the-bastion-server-in-aws).
diff --git a/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-ide.md b/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-ide.md
deleted file mode 100644
index 3c41432bc62..00000000000
--- a/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-ide.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: "dbt Cloud IDE"
-description: "Learn how to configure Git in dbt Cloud"
- - - - - -
-
-
- - - - -
\ No newline at end of file diff --git a/website/docs/docs/cloud/manage-access/auth0-migration.md b/website/docs/docs/cloud/manage-access/auth0-migration.md index 0d7b715b6c6..a40bb006d06 100644 --- a/website/docs/docs/cloud/manage-access/auth0-migration.md +++ b/website/docs/docs/cloud/manage-access/auth0-migration.md @@ -4,11 +4,6 @@ id: "auth0-migration" sidebar: "SSO Auth0 Migration" description: "Required actions for migrating to Auth0 for SSO services on dbt Cloud." --- -:::warning Limited availability - -This is a new feature that is being implemented incrementally to customers using single sign-on features today. If you have any questions or concerns about the availability of the migration feature, please [contact support](mailto:support@getdbt.com). - -::: dbt Labs is partnering with Auth0 to bring enhanced features to dbt Cloud's single sign-on (SSO) capabilities. Auth0 is an identity and access management (IAM) platform with advanced security features, and it will be leveraged by dbt Cloud. These changes will require some action from customers with SSO configured in dbt Cloud today, and this guide will outline the necessary changes for each environment. diff --git a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md index 24c64a5abed..63786f40bd8 100644 --- a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md +++ b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md @@ -43,7 +43,7 @@ If you're on an Enterprise plan and have the correct [permissions](/docs/cloud/m - To remove a user, go to **Account Settings**, select **Users** under **Teams**. Select the user you want to remove, click **Edit**, and then **Delete**. This action cannot be undone. However, you can re-invite the user with the same info if you deleted the user in error.
-- To add a user, go to **Account Settings**, select **Users** under **Teams**. Select **Invite Users**. For fine-grained permission configuration, refer to [Role based access control](/docs/cloud/manage-access/enterprise-permissions).
+- To add a user, go to **Account Settings**, select **Users** under **Teams**. Select [**Invite Users**](/docs/cloud/manage-access/invite-users). For fine-grained permission configuration, refer to [Role based access control](/docs/cloud/manage-access/enterprise-permissions).

@@ -76,14 +76,7 @@ To add a user in dbt Cloud, you must be an account owner or have admin privilege

-Now that you've updated your billing, you can now invite users to join your dbt Cloud account:
-
-4. In **Account Settings**, select **Users** under **Teams**.
-5. Select the user you want to add by clicking **Invite Users**.
-6. In the **Invite Users** side panel, add the invited user's email(s), assign their license, and Groups.
-7. Click **Send Invitations** at the bottom of the page.
-
-
+Now that you've updated your billing, you can [invite users](/docs/cloud/manage-access/invite-users) to join your dbt Cloud account:

Great work! After completing these steps, your dbt Cloud user count and billing count should now be the same.
diff --git a/website/docs/docs/cloud/manage-access/invite-users.md b/website/docs/docs/cloud/manage-access/invite-users.md
new file mode 100644
index 00000000000..21be7010a30
--- /dev/null
+++ b/website/docs/docs/cloud/manage-access/invite-users.md
@@ -0,0 +1,76 @@
+---
+title: "Invite users to dbt Cloud"
+description: "Learn how to manually invite users to dbt Cloud"
+id: "invite-users"
+sidebar: "Invite users"
+---
+
+dbt Cloud makes it easy to invite new users to your environment out of the box. This feature is available to all dbt Cloud customers on Teams or Enterprise plans (Developer plans are limited to a single user).
+
+## Prerequisites
+
+You must have proper permissions to invite new users:
+
+- [**Teams accounts**](/docs/cloud/manage-access/self-service-permissions) — must have `member` or `owner` permissions.
+- [**Enterprise accounts**](/docs/cloud/manage-access/enterprise-permissions) — must have `admin`, `account admin`, `project creator`, or `security admin` permissions.
+- The admin inviting the users must have a `developer` or `IT` license.
+
+## Invite new users
+
+1. In your dbt Cloud account, select the gear menu in the upper right corner and then select **Account Settings**.
+2. From the left sidebar, select **Users**.
+
+
+3. Click on **Invite Users**.
+
+
+4. In the **Email Addresses** field, enter the email addresses of the users you would like to invite separated by commas, semicolons, or new lines.
+5. Select the license type for the batch of users from the **License** dropdown.
+6. Select the group(s) you would like the invitees to belong to.
+7. Click **Send Invitations**.
+   - If the list of invitees exceeds the number of licenses your account has available, you will receive a warning when you click **Send Invitations** and the invitations will not be sent.
+
+
+## User experience
+
+dbt Cloud generates and sends emails from `support@getdbt.com` to the specified addresses. Make sure traffic from the `support@getdbt.com` email is allowed in your settings to prevent emails from going to spam or being blocked. This is the originating email address for all [instances worldwide](/docs/cloud/about-cloud/regions-ip-addresses).
+
+
+The email contains a link to create an account.
When the user clicks on it, they will be brought to one of two screens, depending on whether SSO is configured.
+
+
+
+
+
+
+With the default settings, the user clicks the link and is prompted to create their account:
+
+
+
+
+
+
+If SSO is configured for the environment, the user clicks the link, is brought to a confirmation screen, and presented with a link to authenticate against the company's identity provider:
+
+
+
+
+
+
+
+Once the user completes this process, their email and user information will populate in the **Users** screen in dbt Cloud.
+
+## FAQ
+
+* Is there a limit to the number of users I can invite? _Your ability to invite users is limited to the number of licenses you have available._
+* Why are users clicking the invitation link and getting an `Invalid Invitation Code` error? _We have seen scenarios where embedded secure link technology (such as enterprise Outlook's [Safe Links](https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/safe-links-about?view=o365-worldwide) feature) can result in errors when clicking on the email link. Be sure to include the `getdbt.com` URL in the allowlists for these services._
+* Can I have a mixture of users with SSO and username/password authentication? _Once SSO is enabled, you will no longer be able to add local users. If you have contractors or similar contingent workers, we recommend you add them to your SSO service._
+* What happens if I need to resend the invitation? _From the Users page, click on the invite record, and you will be presented with the option to resend the invitation._
+* What can I do if I entered an email address incorrectly? _From the Users page, click on the invite record, and you will be presented with the option to revoke it. Once revoked, generate a new invitation to the correct email address._
+
+
\ No newline at end of file
diff --git a/website/docs/docs/collaborate/cloud-build-and-view-your-docs.md b/website/docs/docs/collaborate/cloud-build-and-view-your-docs.md
index b387c64788f..e104ea8640c 100644
--- a/website/docs/docs/collaborate/cloud-build-and-view-your-docs.md
+++ b/website/docs/docs/collaborate/cloud-build-and-view-your-docs.md
@@ -5,7 +5,7 @@ description: "Automatically generate project documentation as you run jobs."
pagination_next: null
---

-dbt enables you to generate documentation for your project and data warehouse, and renders the documentation in a website. For more information, see [Documentation](/docs/collaborate/documentation).
+dbt Cloud enables you to generate documentation for your project and data platform, rendering it as a website. The documentation is only updated with new information after a fully successful job run, ensuring accuracy and relevance. Refer to [Documentation](/docs/collaborate/documentation) for more details.

## Set up a documentation job

@@ -52,13 +52,15 @@ You configure project documentation to generate documentation when the job you s

To generate documentation in the dbt Cloud IDE, run the `dbt docs generate` command in the Command Bar in the dbt Cloud IDE. This command will generate the Docs for your dbt project as it exists in development in your IDE session.

-
+

After generating your documentation, you can click the **Book** icon above the file tree, to see the latest version of your documentation rendered in a new browser window.

## Viewing documentation

-Once you set up a job to generate documentation for your project, you can click **Documentation** in the top left.
Your project's documentation should open. This link will always navigate you to the most recent version of your project's documentation in dbt Cloud. +Once you set up a job to generate documentation for your project, you can click **Documentation** in the top left. Your project's documentation should open. This link will always help you find the most recent version of your project's documentation in dbt Cloud. + +These generated docs always show the last fully successful run, which means that if any task fails, including tests, the docs won't reflect changes from that run. The dbt Cloud IDE makes it possible to view [documentation](/docs/collaborate/documentation) for your dbt project while your code is still in development. With this workflow, you can inspect and verify what your project's generated documentation will look like before your changes are released to production. diff --git a/website/docs/docs/collaborate/explore-multiple-projects.md b/website/docs/docs/collaborate/explore-multiple-projects.md new file mode 100644 index 00000000000..3be35110a37 --- /dev/null +++ b/website/docs/docs/collaborate/explore-multiple-projects.md @@ -0,0 +1,46 @@ +--- +title: "Explore multiple projects" +sidebar_label: "Explore multiple projects" +description: "Learn about project-level lineage in dbt Explorer and its uses." +pagination_next: null +--- + +You can also view all the different projects and public models in the account, where the public models are defined, and how they are used, to gain a better understanding of your cross-project resources. + +The resource-level lineage graph for a given project displays the cross-project relationships in the DAG. The different icons indicate whether you’re looking at an upstream producer project (parent) or a downstream consumer project (child). + +When you view an upstream (parent) project, its public models display a counter icon in the upper right corner indicating how many downstream (child) projects depend on them. Selecting a model reveals the lineage indicating the projects dependent on that model. These counts include all projects listing the upstream one as a dependency in their `dependencies.yml`, even without a direct `{{ ref() }}`. Selecting a project node from a public model opens its detailed lineage graph, which is subject to your [permissions](/docs/cloud/manage-access/enterprise-permissions). + + + +When viewing a downstream (child) project that imports and refs public models from upstream (parent) projects, public models will show up in the lineage graph and display an icon on the graph edge that indicates what the relationship is to a model from another project. Hovering over this icon indicates the specific dbt Cloud project that produces that model. Double-clicking on a model from another project opens the resource-level lineage graph of the parent project, which is subject to your permissions. + + + + +## Explore the project-level lineage graph + +For cross-project collaboration, you can interact with the DAG in all the same ways as described in [Explore your project's lineage](/docs/collaborate/explore-projects#project-lineage), but you can also interact with it at the project level and view the details. + +To get a list view of all the projects, select the account name at the top of the **Explore** page near the navigation bar. This view includes a public model list, project list, and a search bar for project searches. 
You can also view the project-level lineage graph by clicking the Lineage view icon in the page's upper right corner. + +If you have permissions for a project in the account, you can view all public models used across the entire account. However, you can only view full public model details and private models if you have permissions for a project where the models are defined. + +From the project-level lineage graph, you can: + +- Click the Lineage view icon (in the graph’s upper right corner) to view the cross-project lineage graph. +- Click the List view icon (in the graph’s upper right corner) to view the project list. + - Select a project from the **Projects** tab to switch to that project’s main **Explore** page. + - Select a model from the **Public Models** tab to view the [model’s details page](/docs/collaborate/explore-projects#view-resource-details). + - Perform searches on your projects with the search bar. +- Select a project node in the graph (double-clicking) to switch to that particular project’s lineage graph. + +When you select a project node in the graph, a project details panel opens on the graph’s right-hand side where you can: + +- View counts of the resources defined in the project. +- View a list of its public models, if any. +- View a list of other projects that use the project, if any. +- Click **Open Project Lineage** to switch to the project’s lineage graph. +- Click the Share icon to copy the project panel link to your clipboard so you can share the graph with someone. + + \ No newline at end of file diff --git a/website/docs/docs/collaborate/explore-projects.md b/website/docs/docs/collaborate/explore-projects.md index 282ef566356..05326016fab 100644 --- a/website/docs/docs/collaborate/explore-projects.md +++ b/website/docs/docs/collaborate/explore-projects.md @@ -2,7 +2,7 @@ title: "Explore your dbt projects" sidebar_label: "Explore dbt projects" description: "Learn about dbt Explorer and how to interact with it to understand, improve, and leverage your data pipelines." -pagination_next: null +pagination_next: "docs/collaborate/explore-multiple-projects" pagination_prev: null --- @@ -53,17 +53,23 @@ To interact with the full lineage graph, you can: - Hover over any item in the graph to display the resource’s name and type. - Zoom in and out on the graph by mouse-scrolling. - Grab and move the graph and the nodes. +- Right-click on a node (context menu) to: + - Refocus on the node, including its parent and child nodes + - Refocus on the node and its children only + - Refocus on the node and its parents only + - View the node's [resource details](#view-resource-details) page + - Select a resource to highlight its relationship with other resources in your project. A panel opens on the graph’s right-hand side that displays a high-level summary of the resource’s details. The side panel includes a **General** tab for information like description, materialized type, and other details. - Click the Share icon in the side panel to copy the graph’s link to your clipboard. - Click the View Resource icon in the side panel to [view the resource details](#view-resource-details). -- [Search and select specific resources](#search-resources) or a subset of the DAG using selectors and graph operators. For example: +- [Search and select specific resources](#search-resources) or a subset of the DAG using [selectors](/reference/node-selection/methods) and [graph operators](/reference/node-selection/graph-operators). This can help you narrow your focus to the resources that interest you. 
For example: - `+[RESOURCE_NAME]` — Displays all parent nodes of the resource - `resource_type:model [RESOURCE_NAME]` — Displays all models matching the name search - [View resource details](#view-resource-details) by selecting a node (double-clicking) in the graph. - Click the List view icon in the graph's upper right corner to return to the main **Explore** page. - + ## Search for resources {#search-resources} @@ -74,9 +80,15 @@ Select a node (single-click) in the lineage graph to highlight its relationship ### Search with keywords When searching with keywords, dbt Explorer searches through your resource metadata (such as resource type, resource name, column name, source name, tags, schema, database, version, alias/identifier, and package name) and returns any matches. -### Search with selector methods +- Keyword search features a side panel (to the right of the main section) to filter search results by resource type. +- Use this panel to select specific resource tags or model access levels under the **Models** option. + - For example, a search for "sale" returns results that include all resources with the keyword "sale" in their metadata. Filtering by **Models** and **Sources** refines these results to only include models or sources. + +- When searching for an exact column name, the results show all relational nodes containing that column in their schemas. If there's a match, a notice in the search result indicates the resource contains the specified column. + +### Search with selectors -You can search with [selector methods](/reference/node-selection/methods). Below are the selectors currently available in dbt Explorer: +You can search with [selectors](/reference/node-selection/methods). Below are the selectors currently available in dbt Explorer: - `fqn:` — Find resources by [file or fully qualified name](/reference/node-selection/methods#the-fqn-method). This selector is the search bar's default. If you want to use the default, it's unnecessary to add `fqn:` before the search term. - `source:` — Find resources by a specified [source](/reference/node-selection/methods#the-source-method). @@ -91,23 +103,15 @@ You can search with [selector methods](/reference/node-selection/methods). Below -### Search with graph operators +Because the results of selectors are immutable, the filter side panel is not available with this search method. -You can use [graph operators](/reference/node-selection/graph-operators) on keywords or selector methods. For example, `+orders` returns all the parents of `orders`. - -### Search with set operators +When searching with selector methods, you can also use [graph operators](/reference/node-selection/graph-operators). For example, `+orders` returns all the parents of `orders`. This functionality is not available for keyword search. You can use multiple selector methods in your search query with [set operators](/reference/node-selection/set-operators). A space implies a union set operator, and a comma implies an intersection. For example: - `resource_type:metric,tag:nightly` — Returns metrics with the tag `nightly` - `+snowplow_sessions +fct_orders` — Returns resources that are parent nodes of either `snowplow_sessions` or `fct_orders` -### Search with both keywords and selector methods - -You can use keyword search to highlight results that are filtered by the selector search. 
For example, if you don't have a resource called `customers`, then `resource_type:metric customers` returns all the metrics in your project and highlights those that are related to the term `customers` in the name, in a column, tagged as customers, and so on. -When searching in this way, the selectors behave as filters that you can use to narrow the search and keywords as a way to find matches within those filtered results. - - + ## Browse with the sidebar @@ -120,7 +124,7 @@ To browse using a different view, you can choose one of these options from the * - **File Tree** — All resources in the project organized by the file in which they are defined. This mirrors the file tree in your dbt project repository. - **Database** — All resources in the project organized by the database and schema in which they are built. This mirrors your data platform's structure that represents the [applied state](/docs/dbt-cloud-apis/project-state) of your project. - + ## View model versions @@ -132,7 +136,7 @@ You can view the definition and latest run results of any resource in your proje The details (metadata) available to you depend on the resource’s type, its definition, and the [commands](/docs/deploy/job-commands) that run within jobs in the production environment. - + ### Example of model details @@ -143,11 +147,11 @@ An example of the details you might get for a model: - **Lineage** graph — The model’s lineage graph that you can interact with. The graph includes one parent node and one child node from the model. Click the Expand icon in the graph's upper right corner to view the model in full lineage graph mode. - **Description** section — A [description of the model](/docs/collaborate/documentation#adding-descriptions-to-your-project). - **Recent** section — Information on the last time the model ran, how long it ran for, whether the run was successful, the job ID, and the run ID. - - **Tests** section — [Tests](/docs/build/tests) for the model. + - **Tests** section — [Tests](/docs/build/tests) for the model, including a status indicator for the latest test run. A :white_check_mark: denotes a passing test. - **Details** section — Key properties like the model’s relation name (for example, how it’s represented and how you can query it in the data platform: `database.schema.identifier`); model governance attributes like access, group, and if contracted; and more. - **Relationships** section — The nodes the model **Depends On**, is **Referenced by**, and (if applicable) is **Used by** for projects that have declared the models' project as a dependency. - **Code** tab — The source code and compiled code for the model. -- **Columns** tab — The available columns in the model. This tab also shows tests results (if any) that you can select to view the test's details page. A :white_check_mark: denotes a passing test. +- **Columns** tab — The available columns in the model. This tab also shows test results (if any) that you can select to view the test's details page. A :white_check_mark: denotes a passing test. To filter the columns in the resource, you can use the search bar that's located at the top of the columns view. ### Example of exposure details @@ -189,47 +193,6 @@ An example of the details you might get for each source table within a source collection: - **Relationships** section — A table that lists all the sources used with their freshness status, the timestamp of when freshness was last checked, and the timestamp of when the source was last loaded. 
- **Columns** tab — The available columns in the source. This tab also shows test results (if any) that you can select to view the test's details page. A :white_check_mark: denotes a passing test. -## About project-level lineage -You can also view all the different projects and public models in the account, where the public models are defined, and how they are used to gain a better understanding about your cross-project resources. - -When viewing the resource-level lineage graph for a given project that uses cross-project references, you can see cross-project relationships represented in the DAG. The iconography is slightly different depending on whether you're viewing the lineage of an upstream producer project or a downstream consumer project. - -When viewing an upstream (parent) project that produces public models that are imported by downstream (child) projects, public models will have a counter icon in their upper right corner that indicates the number of projects that declare the current project as a dependency. Selecting that model reveals the lineage to show the specific projects that are dependent on this model. Projects show up in this counter if they declare the parent project as a dependency in its `dependencies.yml` regardless of whether or not there's a direct `{{ ref() }}` against the public model. Selecting a project node from a public model opens the resource-level lineage graph for that project, which is subject to your permissions. - - - -When viewing a downstream (child) project that imports and refs public models from upstream (parent) projects, public models will show up in the lineage graph and display an icon on the graph edge that indicates what the relationship is to a model from another project. Hovering over this icon indicates the specific dbt Cloud project that produces that model. Double-clicking on a model from another project opens the resource-level lineage graph of the parent project, which is subject to your permissions. - - - - -### Explore the project-level lineage graph - -For cross-project collaboration, you can interact with the DAG in all the same ways as described in [Explore your project's lineage](#project-lineage) but you can also interact with it at the project level and view the details. - -To get a list view of all the projects, select the account name at the top of the **Explore** page near the navigation bar. This view includes a public model list, project list, and a search bar for project searches. You can also view the project-level lineage graph by clicking the Lineage view icon in the page's upper right corner. - -If you have permissions for a project in the account, you can view all public models used across the entire account. However, you can only view full public model details and private models if you have permissions for a project where the models are defined. - -From the project-level lineage graph, you can: - -- Click the Lineage view icon (in the graph’s upper right corner) to view the cross-project lineage graph. -- Click the List view icon (in the graph’s upper right corner) to view the project list. - - Select a project from the **Projects** tab to switch to that project’s main **Explore** page. - - Select a model from the **Public Models** tab to view the [model’s details page](#view-resource-details). - - Perform searches on your projects with the search bar. -- Select a project node in the graph (double-clicking) to switch to that particular project’s lineage graph. 
- -When you select a project node in the graph, a project details panel opens on the graph’s right-hand side where you can: - -- View counts of the resources defined in the project. -- View a list of its public models, if any. -- View a list of other projects that uses the project, if any. -- Click **Open Project Lineage** to switch to the project’s lineage graph. -- Click the Share icon to copy the project panel link to your clipboard so you can share the graph with someone. - - - ## Related content - [Enterprise permissions](/docs/cloud/manage-access/enterprise-permissions) - [About model governance](/docs/collaborate/govern/about-model-governance) diff --git a/website/docs/docs/collaborate/govern/model-contracts.md b/website/docs/docs/collaborate/govern/model-contracts.md index 442a20df1b6..342d86c1a77 100644 --- a/website/docs/docs/collaborate/govern/model-contracts.md +++ b/website/docs/docs/collaborate/govern/model-contracts.md @@ -125,8 +125,8 @@ Select the adapter-specific tab for more information on [constraint](/reference/ | Constraint type | Support | Platform enforcement | |:-----------------|:-------------|:---------------------| | not_null | ✅ Supported | ✅ Enforced | -| primary_key | ✅ Supported | ✅ Enforced | -| foreign_key | ✅ Supported | ✅ Enforced | +| primary_key | ✅ Supported | ❌ Not enforced | +| foreign_key | ✅ Supported | ❌ Not enforced | | unique | ❌ Not supported | ❌ Not enforced | | check | ❌ Not supported | ❌ Not enforced | diff --git a/website/docs/docs/collaborate/govern/model-versions.md b/website/docs/docs/collaborate/govern/model-versions.md index 49ed65f9a36..2a79e2f46e7 100644 --- a/website/docs/docs/collaborate/govern/model-versions.md +++ b/website/docs/docs/collaborate/govern/model-versions.md @@ -393,6 +393,32 @@ dbt.exceptions.AmbiguousAliasError: Compilation Error We opted to use `generate_alias_name` for this functionality so that the logic remains accessible to end users, and could be reimplemented with custom logic. ::: +### Run a model with multiple versions + +To run a model with multiple versions, you can use the [`--select` flag](/reference/node-selection/syntax). For example: + +- Run all versions of `dim_customers`: + + ```bash + dbt run --select dim_customers # Run all versions of the model + ``` +- Run only version 2 of `dim_customers`: + + You can use either of the following commands (both achieve the same result): + + ```bash + dbt run --select dim_customers.v2 # Run a specific version of the model + dbt run --select dim_customers_v2 # Alternative syntax for the specific version + ``` + +- Run the latest version of `dim_customers` using the `--select` flag shorthand and an intersection (no space around the comma): + + ```bash + dbt run -s dim_customers,version:latest # Run the latest version of the model + ``` + +These commands provide flexibility in managing and executing different versions of a dbt model. + ### Optimizing model versions How you define each model version is completely up to you. While it's easy to start by copy-pasting from one model's SQL definition into another, you should think about _what actually is changing_ from one version to another. 
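The selection commands above assume `dim_customers` has been declared as a versioned model in its properties file. As a minimal sketch of what that declaration can look like (the two-version layout is illustrative, and the contract config is included only because versioned models are typically paired with enforced contracts):

```yaml
# models/schema.yml (sketch)
models:
  - name: dim_customers
    latest_version: 2
    config:
      contract:
        enforced: true   # commonly paired with versioning to stabilize the API
    versions:
      - v: 1   # by default resolved from dim_customers_v1.sql
      - v: 2   # by default resolved from dim_customers_v2.sql
```

With a declaration like this in place, `dbt run -s dim_customers,version:latest` resolves to version 2.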
diff --git a/website/docs/docs/collaborate/govern/project-dependencies.md b/website/docs/docs/collaborate/govern/project-dependencies.md index 174e4572890..569d69a87e6 100644 --- a/website/docs/docs/collaborate/govern/project-dependencies.md +++ b/website/docs/docs/collaborate/govern/project-dependencies.md @@ -22,8 +22,12 @@ This year, dbt Labs is introducing an expanded notion of `dependencies` across m - **Packages** — Familiar and pre-existing type of dependency. You take this dependency by installing the package's full source code (like a software library). - **Projects** — A _new_ way to take a dependency on another project. Using a metadata service that runs behind the scenes, dbt Cloud resolves references on-the-fly to public models defined in other projects. You don't need to parse or run those upstream models yourself. Instead, you treat your dependency on those models as an API that returns a dataset. The maintainer of the public model is responsible for guaranteeing its quality and stability. +import UseCaseInfo from '/snippets/_packages_or_dependencies.md'; + + +Refer to the [FAQs](#faqs) for more info. -Starting in dbt v1.6 or higher, `packages.yml` has been renamed to `dependencies.yml`. However, if you need use Jinja within your packages config, such as an environment variable for your private package, you need to keep using `packages.yml` for your packages for now. Refer to the [FAQs](#faqs) for more info. ## Prerequisites @@ -33,22 +37,6 @@ In order to add project dependencies and resolve cross-project `ref`, you must: - Have a successful run of the upstream ("producer") project - Have a multi-tenant or single-tenant [dbt Cloud Enterprise](https://www.getdbt.com/pricing) account (Azure ST is not supported but coming soon) - ## Example As an example, let's say you work on the Marketing team at the Jaffle Shop. The name of your team's project is `jaffle_marketing`: diff --git a/website/docs/docs/connect-adapters.md b/website/docs/docs/connect-adapters.md index e301cfc237e..56ff538dc9b 100644 --- a/website/docs/docs/connect-adapters.md +++ b/website/docs/docs/connect-adapters.md @@ -15,7 +15,7 @@ Explore the fastest and most reliable way to deploy dbt using dbt Cloud, a hoste Install dbt Core, an open-source tool, locally using the command line. dbt communicates with a number of different data platforms by using a dedicated adapter plugin for each. When you install dbt Core, you'll also need to install the specific adapter for your database, [connect to dbt Core](/docs/core/about-core-setup), and set up a `profiles.yml` file. -With a few exceptions [^1], you can install all [Verified adapters](/docs/supported-data-platforms) from PyPI using `pip install adapter-name`. For example to install Snowflake, use the command `pip install dbt-snowflake`. The installation will include `dbt-core` and any other required dependencies, which may include both other dependencies and even other adapter plugins. Read more about [installing dbt](/docs/core/installation). +With a few exceptions [^1], you can install all [Verified adapters](/docs/supported-data-platforms) from PyPI using `python -m pip install adapter-name`. For example, to install Snowflake, use the command `python -m pip install dbt-snowflake`. The installation will include `dbt-core` and any other required dependencies, which may even include other adapter plugins. Read more about [installing dbt](/docs/core/installation-overview). [^1]: Here are the two different adapters. 
Use the PyPI package name when installing with `pip` diff --git a/website/docs/docs/core/about-core-setup.md b/website/docs/docs/core/about-core-setup.md index 64e7694b793..8b170ba70d4 100644 --- a/website/docs/docs/core/about-core-setup.md +++ b/website/docs/docs/core/about-core-setup.md @@ -3,7 +3,7 @@ title: About dbt Core setup id: about-core-setup description: "Configuration settings for dbt Core." sidebar_label: "About dbt Core setup" -pagination_next: "docs/core/about-dbt-core" +pagination_next: "docs/core/dbt-core-environments" pagination_prev: null --- @@ -11,9 +11,10 @@ dbt Core is an [open-source](https://github.com/dbt-labs/dbt-core) tool that ena This section of our docs will guide you through various settings to get started: -- [About dbt Core](/docs/core/about-dbt-core) -- [Installing dbt](/docs/core/installation) - [Connecting to a data platform](/docs/core/connect-data-platform/profiles.yml) - [How to run your dbt projects](/docs/running-a-dbt-project/run-your-dbt-projects) +To learn about developing dbt projects in dbt Cloud, refer to [Develop with dbt Cloud](/docs/cloud/about-develop-dbt). + - dbt Cloud provides a command line interface with the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation). Both dbt Core and the dbt Cloud CLI are command line tools that let you run dbt commands. The key distinction is that the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its [features](/docs/cloud/about-cloud/dbt-cloud-features). + If you need a more detailed first-time setup guide for specific data platforms, read our [quickstart guides](https://docs.getdbt.com/guides). diff --git a/website/docs/docs/core/about-dbt-core.md b/website/docs/docs/core/about-dbt-core.md deleted file mode 100644 index a35d92420f3..00000000000 --- a/website/docs/docs/core/about-dbt-core.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: "About dbt Core" -id: "about-dbt-core" -sidebar_label: "About dbt Core" --- - -[dbt Core](https://github.com/dbt-labs/dbt-core) is an open sourced project where you can develop from the command line and run your dbt project. - -To use dbt Core, your workflow generally looks like: - -1. **Build your dbt project in a code editor —** popular choices include VSCode and Atom. - -2. **Run your project from the command line —** macOS ships with a default Terminal program, however you can also use iTerm or the command line prompt within a code editor to execute dbt commands. - -:::info How we set up our computers for working on dbt projects - -We've written a [guide](https://discourse.getdbt.com/t/how-we-set-up-our-computers-for-working-on-dbt-projects/243) for our recommended setup when running dbt projects using dbt Core. - -::: - -If you're using the command line, we recommend learning some basics of your terminal to help you work more effectively. In particular, it's important to understand `cd`, `ls` and `pwd` to be able to navigate through the directory structure of your computer easily. - -You can find more information on installing and setting up the dbt Core [here](/docs/core/installation). - -**Note** — dbt supports a dbt Cloud CLI and dbt Core, both command line interface tools that enable you to run dbt commands. The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its [features](/docs/cloud/about-cloud/dbt-cloud-features). 
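Several hunks in this PR swap bare `pip install` for `python -m pip install`, which ensures the install targets the Python interpreter you're actually running. As a minimal sketch of the dbt Core setup flow these pages describe (the Postgres adapter and the environment name are illustrative choices, not part of the original docs):

```bash
# Create and activate a virtual environment so dbt's dependencies stay isolated
python -m venv dbt-env
source dbt-env/bin/activate

# Installing an adapter pulls in dbt-core and its required dependencies
python -m pip install dbt-postgres

# Verify the installation and the adapter version
dbt --version
```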
diff --git a/website/docs/docs/core/connect-data-platform/about-core-connections.md b/website/docs/docs/core/connect-data-platform/about-core-connections.md index 492e5ae878a..61a7805d232 100644 --- a/website/docs/docs/core/connect-data-platform/about-core-connections.md +++ b/website/docs/docs/core/connect-data-platform/about-core-connections.md @@ -14,6 +14,7 @@ dbt Core can connect with a variety of data platform providers including: - [Apache Spark](/docs/core/connect-data-platform/spark-setup) - [Databricks](/docs/core/connect-data-platform/databricks-setup) - [Google BigQuery](/docs/core/connect-data-platform/bigquery-setup) +- [Microsoft Fabric](/docs/core/connect-data-platform/fabric-setup) - [PostgreSQL](/docs/core/connect-data-platform/postgres-setup) - [Snowflake](/docs/core/connect-data-platform/snowflake-setup) - [Starburst or Trino](/docs/core/connect-data-platform/trino-setup) diff --git a/website/docs/docs/core/connect-data-platform/alloydb-setup.md b/website/docs/docs/core/connect-data-platform/alloydb-setup.md index c01ba06d887..cbfecb48169 100644 --- a/website/docs/docs/core/connect-data-platform/alloydb-setup.md +++ b/website/docs/docs/core/connect-data-platform/alloydb-setup.md @@ -14,18 +14,10 @@ meta: config_page: '/reference/resource-configs/postgres-configs' --- -## Overview of AlloyDB support +import SetUpPages from '/snippets/_setup-pages-intro.md'; + + -
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
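For context on the `+import SetUpPages ...` additions repeated across the setup-page diffs below: each removed boilerplate block is replaced by a shared snippet driven by the page's frontmatter `meta` block. The component markup itself was stripped from this rendering of the diff, so the usage below is an assumption based on the import name and the fields the removed lists referenced:

```jsx
import SetUpPages from '/snippets/_setup-pages-intro.md';

{/* Assumed usage: the snippet renders the maintainer, authors, repo,
    PyPI package, and version-support details from the page's
    frontmatter `meta` block, replacing the per-page boilerplate. */}
<SetUpPages meta={frontMatter.meta} />
```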
## Profile Configuration diff --git a/website/docs/docs/core/connect-data-platform/athena-setup.md b/website/docs/docs/core/connect-data-platform/athena-setup.md index db218110dc1..468ba7a7847 100644 --- a/website/docs/docs/core/connect-data-platform/athena-setup.md +++ b/website/docs/docs/core/connect-data-platform/athena-setup.md @@ -15,32 +15,11 @@ meta: config_page: '/reference/resource-configs/no-configs' --- -

-## Overview of {frontMatter.meta.pypi_package}
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

+ ## Connecting to Athena with dbt-athena diff --git a/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md b/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md index 073e95530c1..8a4d6b61004 100644 --- a/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md +++ b/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md @@ -24,32 +24,11 @@ Refer to [Microsoft Fabric Synapse Data Warehouse](/docs/core/connect-data-platf ::: -

-## Overview of {frontMatter.meta.pypi_package}
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

+ :::info Dedicated SQL only diff --git a/website/docs/docs/core/connect-data-platform/bigquery-setup.md b/website/docs/docs/core/connect-data-platform/bigquery-setup.md index 96eafadea3b..8238bc043c4 100644 --- a/website/docs/docs/core/connect-data-platform/bigquery-setup.md +++ b/website/docs/docs/core/connect-data-platform/bigquery-setup.md @@ -18,33 +18,9 @@ meta: -

-## Overview of {frontMatter.meta.pypi_package}
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

+import SetUpPages from '/snippets/_setup-pages-intro.md'; + ## Authentication Methods diff --git a/website/docs/docs/core/connect-data-platform/clickhouse-setup.md b/website/docs/docs/core/connect-data-platform/clickhouse-setup.md index fb0965398a2..fce367be812 100644 --- a/website/docs/docs/core/connect-data-platform/clickhouse-setup.md +++ b/website/docs/docs/core/connect-data-platform/clickhouse-setup.md @@ -17,34 +17,9 @@ meta: Some core functionality may be limited. If you're interested in contributing, check out the source code for each repository listed below. +import SetUpPages from '/snippets/_setup-pages-intro.md'; -

-## Overview of {frontMatter.meta.pypi_package}
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

+ ## Connecting to ClickHouse with **dbt-clickhouse** diff --git a/website/docs/docs/core/connect-data-platform/databend-setup.md b/website/docs/docs/core/connect-data-platform/databend-setup.md index daccd14f6c3..5442327fb27 100644 --- a/website/docs/docs/core/connect-data-platform/databend-setup.md +++ b/website/docs/docs/core/connect-data-platform/databend-setup.md @@ -22,34 +22,9 @@ If you're interested in contributing, check out the source code repository liste ::: -

-## Overview of {frontMatter.meta.pypi_package}
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

+import SetUpPages from '/snippets/_setup-pages-intro.md'; + ## Connecting to Databend Cloud with **dbt-databend-cloud** diff --git a/website/docs/docs/core/connect-data-platform/databricks-setup.md b/website/docs/docs/core/connect-data-platform/databricks-setup.md index caf52d09de3..1ea6afda370 100644 --- a/website/docs/docs/core/connect-data-platform/databricks-setup.md +++ b/website/docs/docs/core/connect-data-platform/databricks-setup.md @@ -18,34 +18,11 @@ meta: -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

- `dbt-databricks` is the recommended adapter for Databricks. It includes features not available in `dbt-spark`, such as: - Unity Catalog support - No need to install additional drivers or dependencies for use on the CLI diff --git a/website/docs/docs/core/connect-data-platform/decodable-setup.md b/website/docs/docs/core/connect-data-platform/decodable-setup.md index b43521732d4..6c3cb487885 100644 --- a/website/docs/docs/core/connect-data-platform/decodable-setup.md +++ b/website/docs/docs/core/connect-data-platform/decodable-setup.md @@ -21,35 +21,9 @@ meta: Some core functionality may be limited. If you're interested in contributing, see the source code for the repository listed below. ::: -

-## Overview of {frontMatter.meta.pypi_package}
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version}
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-dbt-decodable is also available on PyPI. pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration.
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

+import SetUpPages from '/snippets/_setup-pages-intro.md'; + ## Connecting to Decodable with **dbt-decodable** Do the following steps to connect to Decodable with dbt. diff --git a/website/docs/docs/core/connect-data-platform/doris-setup.md b/website/docs/docs/core/connect-data-platform/doris-setup.md index a7e2ba1ba3e..a3e5364d907 100644 --- a/website/docs/docs/core/connect-data-platform/doris-setup.md +++ b/website/docs/docs/core/connect-data-platform/doris-setup.md @@ -4,8 +4,8 @@ description: "Read this guide to learn about the Doris warehouse setup in dbt." id: "doris-setup" meta: maintained_by: SelectDB - authors: long2ice,catpineapple - github_repo: 'selectdb/dbt-selectdb' + authors: catpineapple,JNSimba + github_repo: 'selectdb/dbt-doris' pypi_package: 'dbt-doris' min_core_version: 'v1.3.0' cloud_support: Not Supported @@ -15,33 +15,9 @@ meta: config_page: '/reference/resource-configs/doris-configs' --- -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

+ ## Connecting to Doris/SelectDB with **dbt-doris** diff --git a/website/docs/docs/core/connect-data-platform/dremio-setup.md b/website/docs/docs/core/connect-data-platform/dremio-setup.md index fa6ca154fcd..839dd8cffa8 100644 --- a/website/docs/docs/core/connect-data-platform/dremio-setup.md +++ b/website/docs/docs/core/connect-data-platform/dremio-setup.md @@ -21,33 +21,9 @@ Some core functionality may be limited. If you're interested in contributing, ch ::: -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

+ Follow the repository's link for OS dependencies. @@ -62,7 +38,6 @@ Before connecting from project to Dremio Cloud, follow these prerequisite steps: * Ensure that Python 3.9.x or later is installed on the system that you are running dbt on. - ## Prerequisites for Dremio Software * Ensure that you are using version 22.0 or later. diff --git a/website/docs/docs/core/connect-data-platform/duckdb-setup.md b/website/docs/docs/core/connect-data-platform/duckdb-setup.md index a3fee5a5164..6e118e54061 100644 --- a/website/docs/docs/core/connect-data-platform/duckdb-setup.md +++ b/website/docs/docs/core/connect-data-platform/duckdb-setup.md @@ -21,33 +21,9 @@ Some core functionality may be limited. If you're interested in contributing, ch ::: -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

+ ## Connecting to DuckDB with dbt-duckdb diff --git a/website/docs/docs/core/connect-data-platform/exasol-setup.md b/website/docs/docs/core/connect-data-platform/exasol-setup.md index 2bf4cd7ffac..509ccd67e84 100644 --- a/website/docs/docs/core/connect-data-platform/exasol-setup.md +++ b/website/docs/docs/core/connect-data-platform/exasol-setup.md @@ -21,34 +21,9 @@ Some core functionality may be limited. If you're interested in contributing, ch ::: -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

- dbt-exasol + ### Connecting to Exasol with **dbt-exasol** diff --git a/website/docs/docs/core/connect-data-platform/fabric-setup.md b/website/docs/docs/core/connect-data-platform/fabric-setup.md index aa7784d96ec..deef1e04b22 100644 --- a/website/docs/docs/core/connect-data-platform/fabric-setup.md +++ b/website/docs/docs/core/connect-data-platform/fabric-setup.md @@ -4,48 +4,27 @@ description: "Read this guide to learn about the Microsoft Fabric Synapse Data W id: fabric-setup meta: maintained_by: Microsoft - authors: '[Microsoft](https://github.com/Microsoft)' + authors: 'Microsoft' github_repo: 'Microsoft/dbt-fabric' pypi_package: 'dbt-fabric' min_core_version: '1.4.0' - cloud_support: Not Supported + cloud_support: Supported platform_name: 'Microsoft Fabric' config_page: '/reference/resource-configs/fabric-configs' --- :::info -Below is a guide for use with "Synapse Data Warehouse" a new product within Microsoft Fabric (preview) ([more info](https://learn.microsoft.com/en-us/fabric/data-warehouse/data-warehousing#synapse-data-warehouse)) +Below is a guide for use with [Synapse Data Warehouse](https://learn.microsoft.com/en-us/fabric/data-warehouse/data-warehousing#synapse-data-warehouse), a new product within Microsoft Fabric. -To learn how to set up dbt with Azure Synapse Dedicated Pools, see [Microsoft Azure Synapse DWH setup](/docs/core/connect-data-platform/azuresynapse-setup) +To learn how to set up dbt with Azure Synapse Dedicated Pools, refer to [Microsoft Azure Synapse DWH setup](/docs/core/connect-data-platform/azuresynapse-setup). ::: -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

### Prerequisites diff --git a/website/docs/docs/core/connect-data-platform/fal-setup.md b/website/docs/docs/core/connect-data-platform/fal-setup.md index ef4998e8c1b..76539d67c54 100644 --- a/website/docs/docs/core/connect-data-platform/fal-setup.md +++ b/website/docs/docs/core/connect-data-platform/fal-setup.md @@ -21,36 +21,11 @@ Some core functionality may be limited. If you're interested in contributing, ch ::: -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}[<sql-adapter>]
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-You must install the adapter for SQL transformations and data storage independently from dbt-fal.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

- ## Setting up fal with other adapter diff --git a/website/docs/docs/core/connect-data-platform/firebolt-setup.md b/website/docs/docs/core/connect-data-platform/firebolt-setup.md index c7a5a543512..8fb91dea299 100644 --- a/website/docs/docs/core/connect-data-platform/firebolt-setup.md +++ b/website/docs/docs/core/connect-data-platform/firebolt-setup.md @@ -19,34 +19,11 @@ meta: Some core functionality may be limited. If you're interested in contributing, check out the source code for the repository listed below. -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

- For other information including Firebolt feature support, see the [GitHub README](https://github.com/firebolt-db/dbt-firebolt/blob/main/README.md) and the [changelog](https://github.com/firebolt-db/dbt-firebolt/blob/main/CHANGELOG.md). diff --git a/website/docs/docs/core/connect-data-platform/glue-setup.md b/website/docs/docs/core/connect-data-platform/glue-setup.md index e56e5bcd902..afb95fe6af5 100644 --- a/website/docs/docs/core/connect-data-platform/glue-setup.md +++ b/website/docs/docs/core/connect-data-platform/glue-setup.md @@ -22,34 +22,11 @@ Some core functionality may be limited. If you're interested in contributing, ch ::: -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

- For further (and more likely up-to-date) info, see the [README](https://github.com/aws-samples/dbt-glue#readme) diff --git a/website/docs/docs/core/connect-data-platform/greenplum-setup.md b/website/docs/docs/core/connect-data-platform/greenplum-setup.md index 06ada19a1e9..523a503b128 100644 --- a/website/docs/docs/core/connect-data-platform/greenplum-setup.md +++ b/website/docs/docs/core/connect-data-platform/greenplum-setup.md @@ -16,34 +16,11 @@ meta: config_page: '/reference/resource-configs/greenplum-configs' --- -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
-• Maintained by: {frontMatter.meta.maintained_by}
-• Authors: {frontMatter.meta.authors}
-• GitHub repo: {frontMatter.meta.github_repo}
-• PyPI package: {frontMatter.meta.pypi_package}
-• Slack channel: {frontMatter.meta.slack_channel_name}
-• Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-• dbt Cloud support: {frontMatter.meta.cloud_support}
-• Minimum data platform version: {frontMatter.meta.min_supported_version}
-## Installing {frontMatter.meta.pypi_package}
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-## Configuring {frontMatter.meta.pypi_package}
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}

- For further (and more likely up-to-date) info, see the [README](https://github.com/markporoshin/dbt-greenplum#README.md) diff --git a/website/docs/docs/core/connect-data-platform/hive-setup.md b/website/docs/docs/core/connect-data-platform/hive-setup.md index 61a929c58da..33e45e28a0d 100644 --- a/website/docs/docs/core/connect-data-platform/hive-setup.md +++ b/website/docs/docs/core/connect-data-platform/hive-setup.md @@ -16,34 +16,11 @@ meta: config_page: '/reference/resource-configs/hive-configs' --- -

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
+
+<SetUpPages meta={frontMatter.meta} />
 
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
 
 ## Connection Methods
@@ -154,7 +131,7 @@ you must install the `dbt-hive` plugin.
 The following commands will install the latest version of `dbt-hive` as well as the requisite version of `dbt-core` and `impyla` driver used for connections.
 
 ```
-pip install dbt-hive
+python -m pip install dbt-hive
 ```
 
 ### Supported Functionality
diff --git a/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md b/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md
index cb6c7459418..692342466b0 100644
--- a/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md
+++ b/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md
@@ -22,34 +22,11 @@ Some core functionality may be limited. If you're interested in contributing, ch
 :::
 
-## Overview of dbt-ibmdb2
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
+
+<SetUpPages meta={frontMatter.meta} />
 
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
 
 This is an experimental plugin:
 - We have not tested it extensively
diff --git a/website/docs/docs/core/connect-data-platform/impala-setup.md b/website/docs/docs/core/connect-data-platform/impala-setup.md
index 0a0f1b955a1..df82cab6563 100644
--- a/website/docs/docs/core/connect-data-platform/impala-setup.md
+++ b/website/docs/docs/core/connect-data-platform/impala-setup.md
@@ -16,33 +16,9 @@ meta:
   config_page: '/reference/resource-configs/impala-configs'
 ---

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 ## Connection Methods
diff --git a/website/docs/docs/core/connect-data-platform/infer-setup.md b/website/docs/docs/core/connect-data-platform/infer-setup.md
index 430c5e47f85..7642c553cc4 100644
--- a/website/docs/docs/core/connect-data-platform/infer-setup.md
+++ b/website/docs/docs/core/connect-data-platform/infer-setup.md
@@ -16,32 +16,11 @@ meta:
   min_supported_version: n/a
 ---

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
+
+<SetUpPages meta={frontMatter.meta} />
 
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
 
 ## Connecting to Infer with **dbt-infer**
diff --git a/website/docs/docs/core/connect-data-platform/iomete-setup.md b/website/docs/docs/core/connect-data-platform/iomete-setup.md
index bc015141c85..2f2d18b1e47 100644
--- a/website/docs/docs/core/connect-data-platform/iomete-setup.md
+++ b/website/docs/docs/core/connect-data-platform/iomete-setup.md
@@ -16,35 +16,10 @@ meta:
   config_page: '/reference/resource-configs/no-configs'
 ---

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
+
+<SetUpPages meta={frontMatter.meta} />
 
-## Installation and Distribution
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
 
 ## Set up a iomete Target
diff --git a/website/docs/docs/core/connect-data-platform/layer-setup.md b/website/docs/docs/core/connect-data-platform/layer-setup.md
index f065c0c7313..051094297a2 100644
--- a/website/docs/docs/core/connect-data-platform/layer-setup.md
+++ b/website/docs/docs/core/connect-data-platform/layer-setup.md
@@ -17,34 +17,9 @@ meta:
 ---

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 ### Profile Configuration
diff --git a/website/docs/docs/core/connect-data-platform/materialize-setup.md b/website/docs/docs/core/connect-data-platform/materialize-setup.md
index c8777c29490..70505fe1d65 100644
--- a/website/docs/docs/core/connect-data-platform/materialize-setup.md
+++ b/website/docs/docs/core/connect-data-platform/materialize-setup.md
@@ -6,7 +6,7 @@ meta:
   maintained_by: Materialize Inc.
   pypi_package: 'dbt-materialize'
   authors: 'Materialize team'
-  github_repo: 'MaterializeInc/materialize/blob/main/misc/dbt-materialize'
+  github_repo: 'MaterializeInc/materialize'
   min_core_version: 'v0.18.1'
   min_supported_version: 'v0.28.0'
   cloud_support: Not Supported
@@ -22,32 +22,9 @@ Certain core functionality may vary. If you would like to report a bug, request
 :::

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration, please refer to {frontMatter.meta.platform_name} Configuration.
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 ## Connecting to Materialize
diff --git a/website/docs/docs/core/connect-data-platform/mindsdb-setup.md b/website/docs/docs/core/connect-data-platform/mindsdb-setup.md
index e6b8c5decaa..47d9d311ff9 100644
--- a/website/docs/docs/core/connect-data-platform/mindsdb-setup.md
+++ b/website/docs/docs/core/connect-data-platform/mindsdb-setup.md
@@ -19,35 +19,9 @@
 
 The dbt-mindsdb package allows dbt to connect to [MindsDB](https://github.com/mindsdb/mindsdb).

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installation
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 ## Configurations
diff --git a/website/docs/docs/core/connect-data-platform/mssql-setup.md b/website/docs/docs/core/connect-data-platform/mssql-setup.md
index 5efcc454823..f58827c3554 100644
--- a/website/docs/docs/core/connect-data-platform/mssql-setup.md
+++ b/website/docs/docs/core/connect-data-platform/mssql-setup.md
@@ -22,33 +22,9 @@ Some core functionality may be limited. If you're interested in contributing, ch
 :::

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 :::tip Default settings change in dbt-sqlserver v1.2 / ODBC Driver 18
diff --git a/website/docs/docs/core/connect-data-platform/mysql-setup.md b/website/docs/docs/core/connect-data-platform/mysql-setup.md
index 1df6e205272..4b9224e0a0d 100644
--- a/website/docs/docs/core/connect-data-platform/mysql-setup.md
+++ b/website/docs/docs/core/connect-data-platform/mysql-setup.md
@@ -22,32 +22,9 @@ Some core functionality may be limited. If you're interested in contributing, ch
 :::

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 This is an experimental plugin:
 - It has not been tested extensively.
diff --git a/website/docs/docs/core/connect-data-platform/oracle-setup.md b/website/docs/docs/core/connect-data-platform/oracle-setup.md
index b1195fbd0a0..31e41f1a9a7 100644
--- a/website/docs/docs/core/connect-data-platform/oracle-setup.md
+++ b/website/docs/docs/core/connect-data-platform/oracle-setup.md
@@ -16,35 +16,10 @@ meta:
   config_page: '/reference/resource-configs/oracle-configs'
 ---

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
+
+<SetUpPages meta={frontMatter.meta} />
 
-## Installation
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
 
 ### Configure the Python driver mode
diff --git a/website/docs/docs/core/connect-data-platform/postgres-setup.md b/website/docs/docs/core/connect-data-platform/postgres-setup.md
index f56d3f22576..ec03a205568 100644
--- a/website/docs/docs/core/connect-data-platform/postgres-setup.md
+++ b/website/docs/docs/core/connect-data-platform/postgres-setup.md
@@ -18,33 +18,9 @@ meta:
 ---

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 ## Profile Configuration
diff --git a/website/docs/docs/core/connect-data-platform/profiles.yml.md b/website/docs/docs/core/connect-data-platform/profiles.yml.md
index 97254dda1c4..f8acb65f3d2 100644
--- a/website/docs/docs/core/connect-data-platform/profiles.yml.md
+++ b/website/docs/docs/core/connect-data-platform/profiles.yml.md
@@ -3,7 +3,7 @@ title: "About profiles.yml"
 id: profiles.yml
 ---
 
-If you're using [dbt Core](/docs/core/about-dbt-core), you'll need a `profiles.yml` file that contains the connection details for your data platform. When you run dbt Core from the command line, it reads your `dbt_project.yml` file to find the `profile` name, and then looks for a profile with the same name in your `profiles.yml` file. This profile contains all the information dbt needs to connect to your data platform.
+If you're using [dbt Core](/docs/core/installation-overview), you'll need a `profiles.yml` file that contains the connection details for your data platform. When you run dbt Core from the command line, it reads your `dbt_project.yml` file to find the `profile` name, and then looks for a profile with the same name in your `profiles.yml` file. This profile contains all the information dbt needs to connect to your data platform.
 
 For detailed info, you can refer to the [Connection profiles](/docs/core/connect-data-platform/connection-profiles).
diff --git a/website/docs/docs/core/connect-data-platform/redshift-setup.md b/website/docs/docs/core/connect-data-platform/redshift-setup.md
index 175d5f6a715..464d3b084d8 100644
--- a/website/docs/docs/core/connect-data-platform/redshift-setup.md
+++ b/website/docs/docs/core/connect-data-platform/redshift-setup.md
@@ -18,33 +18,9 @@ meta:
 ---

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specific configuration, refer to {frontMatter.meta.platform_name} Configuration.
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}.
+
+<SetUpPages meta={frontMatter.meta} />
 
 ## Configurations
@@ -70,8 +46,9 @@ pip is the easiest way to install the adapter:
 The authentication methods that dbt Core supports are:
 
 - `database` — Password-based authentication (default, will be used if `method` is not provided)
-- `IAM` — IAM
+- `IAM` — IAM
+For dbt Cloud users, log in using the default **Database username** and **password**. This is necessary because dbt Cloud does not support `IAM` authentication.
 
 Click on one of these authentication methods for further details on how to configure your connection profile. Each tab also includes an example `profiles.yml` configuration file for you to review.
diff --git a/website/docs/docs/core/connect-data-platform/rockset-setup.md b/website/docs/docs/core/connect-data-platform/rockset-setup.md
index 4a146829a03..372a6c0c538 100644
--- a/website/docs/docs/core/connect-data-platform/rockset-setup.md
+++ b/website/docs/docs/core/connect-data-platform/rockset-setup.md
@@ -22,33 +22,9 @@ Certain core functionality may vary. If you would like to report a bug, request
 :::

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 ## Connecting to Rockset with **dbt-rockset**
diff --git a/website/docs/docs/core/connect-data-platform/singlestore-setup.md b/website/docs/docs/core/connect-data-platform/singlestore-setup.md
index a63466542a9..285c41bafc9 100644
--- a/website/docs/docs/core/connect-data-platform/singlestore-setup.md
+++ b/website/docs/docs/core/connect-data-platform/singlestore-setup.md
@@ -22,35 +22,9 @@ Certain core functionality may vary. If you would like to report a bug, request
 :::

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installation and Distribution
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 ### Set up a SingleStore Target
diff --git a/website/docs/docs/core/connect-data-platform/snowflake-setup.md b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
index 98bcf447fed..2b426ef667b 100644
--- a/website/docs/docs/core/connect-data-platform/snowflake-setup.md
+++ b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
@@ -18,33 +18,9 @@ meta:
 ---

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 ## Authentication Methods
diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md
index 895f0559953..93595cea3f6 100644
--- a/website/docs/docs/core/connect-data-platform/spark-setup.md
+++ b/website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -24,26 +24,10 @@ meta:
 See [Databricks setup](#databricks-setup) for the Databricks version of this page.
 :::

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
+
+<SetUpPages meta={frontMatter.meta} />
 
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.

 If connecting to Databricks via ODBC driver, it requires `pyodbc`. Depending on your system, you can install it seperately or via pip. See the [`pyodbc` wiki](https://github.com/mkleehammer/pyodbc/wiki/Install) for OS-specific installation details.
@@ -51,15 +35,15 @@ If connecting to a Spark cluster via the generic thrift or http methods, it requ
 
 ```zsh
 # odbc connections
-$ pip install "dbt-spark[ODBC]"
+$ python -m pip install "dbt-spark[ODBC]"
 
 # thrift or http connections
-$ pip install "dbt-spark[PyHive]"
+$ python -m pip install "dbt-spark[PyHive]"
 ```
 
 ```zsh
 # session connections
-$ pip install "dbt-spark[session]"
+$ python -m pip install "dbt-spark[session]"
 ```
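Whichever of these variants is installed, a quick way to confirm that dbt picked up the adapter is the version output; this is a general check, not specific to this changeset, and assumes the install went into the currently active Python environment:

```zsh
# The spark adapter should appear in the "Plugins" section of the output.
$ dbt --version
```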

## Configuring {frontMatter.meta.pypi_package}

@@ -70,7 +54,7 @@ $ pip install "dbt-spark[session]"
 
 ## Connection Methods
 
-dbt-spark can connect to Spark clusters by three different methods:
+dbt-spark can connect to Spark clusters by four different methods:
 
 - [`odbc`](#odbc) is the preferred method when connecting to Databricks. It supports connecting to a SQL Endpoint or an all-purpose interactive cluster.
 - [`thrift`](#thrift) connects directly to the lead node of a cluster, either locally hosted / on premise or in the cloud (e.g. Amazon EMR).
diff --git a/website/docs/docs/core/connect-data-platform/sqlite-setup.md b/website/docs/docs/core/connect-data-platform/sqlite-setup.md
index 3da902a6f80..20897ea90d7 100644
--- a/website/docs/docs/core/connect-data-platform/sqlite-setup.md
+++ b/website/docs/docs/core/connect-data-platform/sqlite-setup.md
@@ -22,34 +22,9 @@ Some core functionality may be limited. If you're interested in contributing, ch
 :::

-## Overview of {frontMatter.meta.pypi_package}
-
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
+
+<SetUpPages meta={frontMatter.meta} />
 
 Starting with the release of dbt-core 1.0.0, versions of dbt-sqlite are aligned to the same major+minor [version](https://semver.org/) of dbt-core.
 
 - versions 1.1.x of this adapter work with dbt-core 1.1.x
diff --git a/website/docs/docs/core/connect-data-platform/starrocks-setup.md b/website/docs/docs/core/connect-data-platform/starrocks-setup.md
index e5c1abac037..485e1d18fb7 100644
--- a/website/docs/docs/core/connect-data-platform/starrocks-setup.md
+++ b/website/docs/docs/core/connect-data-platform/starrocks-setup.md
@@ -34,7 +34,7 @@ meta:
 
 pip is the easiest way to install the adapter:
 
-pip install {frontMatter.meta.pypi_package}
+python -m pip install {frontMatter.meta.pypi_package}

Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
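A note on the recurring `pip install` to `python -m pip install` change in these files: invoking pip as a module guarantees the package is installed for the interpreter that is first on your `PATH`, rather than for whichever interpreter a standalone `pip` shim happens to belong to. A quick sanity check, assuming a POSIX shell:

```zsh
# Both commands should report the same Python version and environment path.
$ python --version
$ python -m pip --version
```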

diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md
index 1ba8e506b88..1a30a1a4a54 100644
--- a/website/docs/docs/core/connect-data-platform/teradata-setup.md
+++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md
@@ -19,29 +19,12 @@ meta:
 
 Some core functionality may be limited. If you're interested in contributing, check out the source code for the repository listed below.

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
+
+<SetUpPages meta={frontMatter.meta} />
 
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-### Python compatibility
+## Python compatibility
 
 | Plugin version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11 |
 | -------------- | ----------- | ----------- | ----------- | ----------- | ----------- | ------------ |
@@ -56,18 +39,12 @@ pip is the easiest way to install the adapter:
 |1.5.x | ❌ | ✅ | ✅ | ✅ | ✅ | ✅
 |1.6.x | ❌ | ❌ | ✅ | ✅ | ✅ | ✅
 
-### dbt dependent packages version compatibility
+## dbt dependent packages version compatibility
 
 | dbt-teradata | dbt-core | dbt-teradata-util | dbt-util |
 |--------------|------------|-------------------|----------------|
 | 1.2.x | 1.2.x | 0.1.0 | 0.9.x or below |
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+| 1.6.7 | 1.6.7 | 1.1.1 | 1.1.1 |
 
 ### Connecting to Teradata
diff --git a/website/docs/docs/core/connect-data-platform/tidb-setup.md b/website/docs/docs/core/connect-data-platform/tidb-setup.md
index e2205c4665e..253497b37ba 100644
--- a/website/docs/docs/core/connect-data-platform/tidb-setup.md
+++ b/website/docs/docs/core/connect-data-platform/tidb-setup.md
@@ -24,34 +24,9 @@ If you're interested in contributing, check out the source code repository liste
 :::

-## Overview of {frontMatter.meta.pypi_package}
-
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
+
+<SetUpPages meta={frontMatter.meta} />
 
 ## Connecting to TiDB with **dbt-tidb**
diff --git a/website/docs/docs/core/connect-data-platform/trino-setup.md b/website/docs/docs/core/connect-data-platform/trino-setup.md
index 39d8ed8ab3f..a7dc658358f 100644
--- a/website/docs/docs/core/connect-data-platform/trino-setup.md
+++ b/website/docs/docs/core/connect-data-platform/trino-setup.md
@@ -18,38 +18,9 @@ meta:
 ---

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
 
-:::info Vendor-supported plugin
-
-Certain core functionality may vary. If you would like to report a bug, request a feature, or contribute, you can check out the linked repository and open an issue.
-
-:::
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.platform_name}-specifc configuration please refer to {frontMatter.meta.platform_name} Configuration
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}
+
+<SetUpPages meta={frontMatter.meta} />
 
 ## Connecting to Starburst/Trino
@@ -284,7 +255,7 @@ The only authentication parameter to set for OAuth 2.0 is `method: oauth`. If yo
 
 For more information, refer to both [OAuth 2.0 authentication](https://trino.io/docs/current/security/oauth2.html) in the Trino docs and the [README](https://github.com/trinodb/trino-python-client#oauth2-authentication) for the Trino Python client.
 
-It's recommended that you install `keyring` to cache the OAuth 2.0 token over multiple dbt invocations by running `pip install 'trino[external-authentication-token-cache]'`. The `keyring` package is not installed by default.
+It's recommended that you install `keyring` to cache the OAuth 2.0 token over multiple dbt invocations by running `python -m pip install 'trino[external-authentication-token-cache]'`. The `keyring` package is not installed by default.
 
 #### Example profiles.yml for OAuth
diff --git a/website/docs/docs/core/connect-data-platform/upsolver-setup.md b/website/docs/docs/core/connect-data-platform/upsolver-setup.md
index 6b2f410fc07..8e4203e0b0c 100644
--- a/website/docs/docs/core/connect-data-platform/upsolver-setup.md
+++ b/website/docs/docs/core/connect-data-platform/upsolver-setup.md
@@ -33,7 +33,7 @@ pagination_next: null
 
 pip is the easiest way to install the adapter:
 
-pip install {frontMatter.meta.pypi_package}
+python -m pip install {frontMatter.meta.pypi_package}

Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
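Related to the install commands above, adapters are typically installed into a project-scoped virtual environment so that `dbt-core` and adapter versions stay isolated between projects. A minimal sketch using the standard-library `venv` module (the environment name is arbitrary, and `dbt-upsolver` is the package this page documents):

```zsh
# Create and activate a dedicated environment, then install the adapter into it.
$ python -m venv dbt-env
$ source dbt-env/bin/activate
$ python -m pip install dbt-upsolver
```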

diff --git a/website/docs/docs/core/connect-data-platform/vertica-setup.md b/website/docs/docs/core/connect-data-platform/vertica-setup.md
index fbb8de6b301..b1424289137 100644
--- a/website/docs/docs/core/connect-data-platform/vertica-setup.md
+++ b/website/docs/docs/core/connect-data-platform/vertica-setup.md
@@ -6,9 +6,9 @@ meta:
   authors: 'Vertica (Former authors: Matthew Carter, Andy Regan, Andrew Hedengren)'
   github_repo: 'vertica/dbt-vertica'
   pypi_package: 'dbt-vertica'
-  min_core_version: 'v1.4.0 and newer'
+  min_core_version: 'v1.6.0 and newer'
   cloud_support: 'Not Supported'
-  min_supported_version: 'Vertica 12.0.0'
+  min_supported_version: 'Vertica 23.4.0'
   slack_channel_name: 'n/a'
   slack_channel_link: 'https://www.getdbt.com/community/'
   platform_name: 'Vertica'
@@ -21,31 +21,9 @@ If you're interested in contributing, check out the source code for each reposit
 :::

-## Overview of {frontMatter.meta.pypi_package}
+import SetUpPages from '/snippets/_setup-pages-intro.md';
 
-- Maintained by: {frontMatter.meta.maintained_by}
-- Authors: {frontMatter.meta.authors}
-- GitHub repo: {frontMatter.meta.github_repo}
-- PyPI package: {frontMatter.meta.pypi_package}
-- Slack channel: {frontMatter.meta.slack_channel_name}
-- Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
-- dbt Cloud support: {frontMatter.meta.cloud_support}
-- Minimum data platform version: {frontMatter.meta.min_supported_version}
-
-## Installing {frontMatter.meta.pypi_package}
-
-pip is the easiest way to install the adapter:
-pip install {frontMatter.meta.pypi_package}
-
-Installing {frontMatter.meta.pypi_package} will also install dbt-core and any other dependencies.
-
-## Configuring {frontMatter.meta.pypi_package}
-
-For {frontMatter.meta.pypi_package} specific configuration please refer to {frontMatter.meta.platform_name} Configuration.
-
-For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}.
+
+<SetUpPages meta={frontMatter.meta} />

## Connecting to {frontMatter.meta.platform_name} with {frontMatter.meta.pypi_package}

diff --git a/website/docs/docs/core/docker-install.md b/website/docs/docs/core/docker-install.md
index dfb2a669e34..6c1ec9da9e1 100644
--- a/website/docs/docs/core/docker-install.md
+++ b/website/docs/docs/core/docker-install.md
@@ -5,13 +5,13 @@ description: "You can use Docker to install dbt and adapter plugins from the com
 
 dbt Core and all adapter plugins maintained by dbt Labs are available as [Docker](https://docs.docker.com/) images, and distributed via [GitHub Packages](https://docs.github.com/en/packages/learn-github-packages/introduction-to-github-packages) in a [public registry](https://github.com/dbt-labs/dbt-core/pkgs/container/dbt-core).
 
-Using a prebuilt Docker image to install dbt Core in production has a few benefits: it already includes dbt-core, one or more database adapters, and pinned versions of all their dependencies. By contrast, `pip install dbt-core dbt-` takes longer to run, and will always install the latest compatible versions of every dependency.
+Using a prebuilt Docker image to install dbt Core in production has a few benefits: it already includes dbt-core, one or more database adapters, and pinned versions of all their dependencies. By contrast, `python -m pip install dbt-core dbt-` takes longer to run, and will always install the latest compatible versions of every dependency.
 
 You might also be able to use Docker to install and develop locally if you don't have a Python environment set up. Note that running dbt in this manner can be significantly slower if your operating system differs from the system that built the Docker image. If you're a frequent local developer, we recommend that you install dbt Core via [Homebrew](/docs/core/homebrew-install) or [pip](/docs/core/pip-install) instead.
 
 ### Prerequisites
 * You've installed Docker. For more information, see the [Docker](https://docs.docker.com/) site.
-* You understand which database adapter(s) you need. For more information, see [About dbt adapters](/docs/core/installation#about-dbt-adapters).
+* You understand which database adapter(s) you need. For more information, see [About dbt adapters](/docs/core/installation-overview#about-dbt-data-platforms-and-adapters).
 * You understand how dbt Core is versioned. For more information, see [About dbt Core versions](/docs/dbt-versions/core).
 * You have a general understanding of the dbt, dbt workflow, developing locally in the command line interface (CLI). For more information, see [About dbt](/docs/introduction#how-do-i-use-dbt).
diff --git a/website/docs/docs/core/installation-overview.md b/website/docs/docs/core/installation-overview.md
index cb1df26b0f8..8c139012667 100644
--- a/website/docs/docs/core/installation-overview.md
+++ b/website/docs/docs/core/installation-overview.md
@@ -1,25 +1,35 @@
 ---
-title: "About installing dbt"
-id: "installation"
+title: "About dbt Core and installation"
 description: "You can install dbt Core using a few different tested methods."
 pagination_next: "docs/core/homebrew-install"
 pagination_prev: null
 ---
 
+[dbt Core](https://github.com/dbt-labs/dbt-core) is an open-source project where you can develop from the command line and run your dbt project.
+
+To use dbt Core, your workflow generally looks like:
+
+1. **Build your dbt project in a code editor —** popular choices include VSCode and Atom.
+
+2. **Run your project from the command line —** macOS ships with a default Terminal program, however you can also use iTerm or the command line prompt within a code editor to execute dbt commands.
+
+:::info How we set up our computers for working on dbt projects
+
+We've written a [guide](https://discourse.getdbt.com/t/how-we-set-up-our-computers-for-working-on-dbt-projects/243) for our recommended setup when running dbt projects using dbt Core.
+
+:::
+
+If you're using the command line, we recommend learning some basics of your terminal to help you work more effectively. In particular, it's important to understand `cd`, `ls` and `pwd` to be able to navigate through the directory structure of your computer easily.
+
+## Install dbt Core
+
 You can install dbt Core on the command line by using one of these methods:
 
 - [Use pip to install dbt](/docs/core/pip-install) (recommended)
 - [Use Homebrew to install dbt](/docs/core/homebrew-install)
 - [Use a Docker image to install dbt](/docs/core/docker-install)
 - [Install dbt from source](/docs/core/source-install)
-
-:::tip Pro tip: Using the --help flag
-
-Most command-line tools, including dbt, have a `--help` flag that you can use to show available commands and arguments. For example, you can use the `--help` flag with dbt in two ways:

-- `dbt --help`: Lists the commands available for dbt
-- `dbt run --help`: Lists the flags available for the `run` command
-
-:::
+- You can also develop locally using the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation). The dbt Cloud CLI and dbt Core are both command line tools that let you run dbt commands. The key distinction is that the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its [features](/docs/cloud/about-cloud/dbt-cloud-features).
 
 ## Upgrading dbt Core
 
@@ -32,3 +42,11 @@ dbt provides a number of resources for understanding [general best practices](/b
 
 ## About dbt data platforms and adapters
 
 dbt works with a number of different data platforms (databases, query engines, and other SQL-speaking technologies). It does this by using a dedicated _adapter_ for each. When you install dbt Core, you'll also want to install the specific adapter for your database. For more details, see [Supported Data Platforms](/docs/supported-data-platforms).
+
+:::tip Pro tip: Using the --help flag
+
+Most command-line tools, including dbt, have a `--help` flag that you can use to show available commands and arguments. For example, you can use the `--help` flag with dbt in two ways:

+- `dbt --help`: Lists the commands available for dbt
+- `dbt run --help`: Lists the flags available for the `run` command
+
+:::
diff --git a/website/docs/docs/core/pip-install.md b/website/docs/docs/core/pip-install.md
index 44fac00e493..e1a0e65312c 100644
--- a/website/docs/docs/core/pip-install.md
+++ b/website/docs/docs/core/pip-install.md
@@ -39,7 +39,7 @@ alias env_dbt='source /bin/activate'
 Once you know [which adapter](/docs/supported-data-platforms) you're using, you can install it as `dbt-`. For example, if using Postgres:
 
 ```shell
-pip install dbt-postgres
+python -m pip install dbt-postgres
 ```
 
 This will install `dbt-core` and `dbt-postgres` _only_:
@@ -62,7 +62,7 @@ All adapters build on top of `dbt-core`. Some also depend on other adapters: for
 To upgrade a specific adapter plugin:
 
 ```shell
-pip install --upgrade dbt-
+python -m pip install --upgrade dbt-
 ```
 
 ### Install dbt-core only
@@ -70,7 +70,7 @@ pip install --upgrade dbt-
 If you're building a tool that integrates with dbt Core, you may want to install the core library alone, without a database adapter. Note that you won't be able to use dbt as a CLI tool.
 
 ```shell
-pip install dbt-core
+python -m pip install dbt-core
 ```
 
 ### Change dbt Core versions
@@ -79,13 +79,13 @@ You can upgrade or downgrade versions of dbt Core by using the `--upgrade` optio
 To upgrade dbt to the latest version:
 
 ```
-pip install --upgrade dbt-core
+python -m pip install --upgrade dbt-core
 ```
 
 To downgrade to an older version, specify the version you want to use. This command can be useful when you're resolving package dependencies. As an example:
 
 ```
-pip install --upgrade dbt-core==0.19.0
+python -m pip install --upgrade dbt-core==0.19.0
 ```
 
 ### `pip install dbt`
@@ -95,7 +95,7 @@ Note that, as of v1.0.0, `pip install dbt` is no longer supported and will raise
 If you have workflows or integrations that relied on installing the package named `dbt`, you can achieve the same behavior going forward by installing the same five packages that it used:
 
 ```shell
-pip install \
+python -m pip install \
   dbt-core \
   dbt-postgres \
   dbt-redshift \
diff --git a/website/docs/docs/core/source-install.md b/website/docs/docs/core/source-install.md
index 42086159c03..d17adc13c53 100644
--- a/website/docs/docs/core/source-install.md
+++ b/website/docs/docs/core/source-install.md
@@ -17,10 +17,10 @@ To install `dbt-core` from the GitHub code source:
 
 ```shell
 git clone https://github.com/dbt-labs/dbt-core.git
 cd dbt-core
-pip install -r requirements.txt
+python -m pip install -r requirements.txt
 ```
 
-This will install `dbt-core` and `dbt-postgres`. To install in editable mode (includes your local changes as you make them), use `pip install -e editable-requirements.txt` instead.
+This will install `dbt-core` and `dbt-postgres`. To install in editable mode (includes your local changes as you make them), use `python -m pip install -e editable-requirements.txt` instead.
 
 ### Installing adapter plugins
 
@@ -29,12 +29,12 @@ To install an adapter plugin from source, you will need to first locate its sour
 
 ```shell
 git clone https://github.com/dbt-labs/dbt-redshift.git
 cd dbt-redshift
-pip install .
+python -m pip install .
 ```
 
 You do _not_ need to install `dbt-core` before installing an adapter plugin -- the plugin includes `dbt-core` among its dependencies, and it will install the latest compatible version automatically.
 
-To install in editable mode, such as while contributing, use `pip install -e .` instead.
+To install in editable mode, such as while contributing, use `python -m pip install -e .` instead.
diff --git a/website/docs/docs/dbt-cloud-apis/project-state.md b/website/docs/docs/dbt-cloud-apis/project-state.md index a5ee71ebb1b..62136b35463 100644 --- a/website/docs/docs/dbt-cloud-apis/project-state.md +++ b/website/docs/docs/dbt-cloud-apis/project-state.md @@ -66,7 +66,7 @@ Most Discovery API use cases will favor the _applied state_ since it pertains to | Seed | Yes | Yes | Yes | Downstream | Applied & definition | | Snapshot | Yes | Yes | Yes | Upstream & downstream | Applied & definition | | Test | Yes | Yes | No | Upstream | Applied & definition | -| Exposure | No | No | No | Upstream | Applied & definition | +| Exposure | No | No | No | Upstream | Definition | | Metric | No | No | No | Upstream & downstream | Definition | | Semantic model | No | No | No | Upstream & downstream | Definition | | Group | No | No | No | Downstream | Definition | diff --git a/website/docs/docs/dbt-cloud-apis/service-tokens.md b/website/docs/docs/dbt-cloud-apis/service-tokens.md index 9553f48a013..f1369711d2b 100644 --- a/website/docs/docs/dbt-cloud-apis/service-tokens.md +++ b/website/docs/docs/dbt-cloud-apis/service-tokens.md @@ -115,3 +115,5 @@ To rotate your token: 4. Copy the new token and replace the old one in your systems. Store it in a safe place, as it will not be available again once the creation screen is closed. 5. Delete the old token in dbt Cloud by clicking the **trash can icon**. _Only take this action after the new token is in place to avoid service disruptions_. +## FAQs + diff --git a/website/docs/docs/dbt-cloud-apis/sl-graphql.md b/website/docs/docs/dbt-cloud-apis/sl-graphql.md index 0e39f50f60a..b7d13d0d453 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-graphql.md +++ b/website/docs/docs/dbt-cloud-apis/sl-graphql.md @@ -48,7 +48,7 @@ Authentication uses a dbt Cloud [service account tokens](/docs/dbt-cloud-apis/se {"Authorization": "Bearer "} ``` -Each GQL request also requires a dbt Cloud `environmentId`. The API uses both the service token in the header and environmentId for authentication. +Each GQL request also requires a dbt Cloud `environmentId`. The API uses both the service token in the header and `environmentId` for authentication. ### Metadata calls @@ -150,6 +150,60 @@ metricsForDimensions( ): [Metric!]! ``` +**Metric Types** + +```graphql +Metric { + name: String! + description: String + type: MetricType! + typeParams: MetricTypeParams! + filter: WhereFilter + dimensions: [Dimension!]! + queryableGranularities: [TimeGranularity!]! +} +``` + +``` +MetricType = [SIMPLE, RATIO, CUMULATIVE, DERIVED] +``` + +**Metric Type parameters** + +```graphql +MetricTypeParams { + measure: MetricInputMeasure + inputMeasures: [MetricInputMeasure!]! + numerator: MetricInput + denominator: MetricInput + expr: String + window: MetricTimeWindow + grainToDate: TimeGranularity + metrics: [MetricInput!] +} +``` + + +**Dimension Types** + +```graphql +Dimension { + name: String! + description: String + type: DimensionType! + typeParams: DimensionTypeParams + isPartition: Boolean! + expr: String + queryableGranularities: [TimeGranularity!]! +} +``` + +``` +DimensionType = [CATEGORICAL, TIME] +``` + +### Querying + **Create Dimension Values query** ```graphql @@ -205,59 +259,128 @@ query( ): QueryResult! ``` -**Metric Types** +The GraphQL API uses a polling process for querying since queries can be long-running in some cases. It works by first creating a query with a mutation, `createQuery, which returns a query ID. 
This ID is then used to continuously check (poll) for the results and status of your query. The typical flow would look as follows: +1. Kick off a query ```graphql -Metric { - name: String! - description: String - type: MetricType! - typeParams: MetricTypeParams! - filter: WhereFilter - dimensions: [Dimension!]! - queryableGranularities: [TimeGranularity!]! +mutation { + createQuery( + environmentId: 123456 + metrics: [{name: "order_total"}] + groupBy: [{name: "metric_time"}] + ) { + queryId # => Returns 'QueryID_12345678' + } } ``` - -``` -MetricType = [SIMPLE, RATIO, CUMULATIVE, DERIVED] +2. Poll for results +```graphql +{ + query(environmentId: 123456, queryId: "QueryID_12345678") { + sql + status + error + totalPages + jsonResult + arrowResult + } +} ``` +3. Keep querying 2. at an appropriate interval until status is `FAILED` or `SUCCESSFUL` + +### Output format and pagination + +**Output format** + +By default, the output is in Arrow format. You can switch to JSON format using the following parameter. However, due to performance limitations, we recommend using the JSON parameter for testing and validation. The JSON received is a base64 encoded string. To access it, you can decode it using a base64 decoder. The JSON is created from pandas, which means you can change it back to a dataframe using `pandas.read_json(json, orient="table")`. Or you can work with the data directly using `json["data"]`, and find the table schema using `json["schema"]["fields"]`. Alternatively, you can pass `encoded:false` to the jsonResult field to get a raw JSON string directly. -**Metric Type parameters** ```graphql -MetricTypeParams { - measure: MetricInputMeasure - inputMeasures: [MetricInputMeasure!]! - numerator: MetricInput - denominator: MetricInput - expr: String - window: MetricTimeWindow - grainToDate: TimeGranularity - metrics: [MetricInput!] +{ + query(environmentId: BigInt!, queryId: Int!, pageNum: Int! = 1) { + sql + status + error + totalPages + arrowResult + jsonResult(orient: PandasJsonOrient! = TABLE, encoded: Boolean! = true) + } } ``` +The results default to the table but you can change it to any [pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_json.html) supported value. -**Dimension Types** +**Pagination** -```graphql -Dimension { - name: String! - description: String - type: DimensionType! - typeParams: DimensionTypeParams - isPartition: Boolean! - expr: String - queryableGranularities: [TimeGranularity!]! +By default, we return 1024 rows per page. If your result set exceeds this, you need to increase the page number using the `pageNum` option. + +### Run a Python query + +The `arrowResult` in the GraphQL query response is a byte dump, which isn't visually useful. You can convert this byte data into an Arrow table using any Arrow-supported language. 
+ +### Run a Python query + +The `arrowResult` in the GraphQL query response is a byte dump, which isn't visually useful. You can convert this byte data into an Arrow table using any Arrow-supported language. Refer to the following Python example explaining how to query and decode the arrow result: + + +```python +import base64 +import pyarrow as pa +import requests +import time + +headers = {"Authorization":"Bearer "} +query_result_request = """ +{ + query(environmentId: 70, queryId: "12345678") { + sql + status + error + arrowResult + } +} +""" + +while True: + gql_response = requests.post( + "https://semantic-layer.cloud.getdbt.com/api/graphql", + json={"query": query_result_request}, + headers=headers, + ) + if gql_response.json()["data"]["query"]["status"] in ["FAILED", "SUCCESSFUL"]: + break + # Set an appropriate interval between polling requests + time.sleep(1) + +""" +gql_response.json() => +{ + "data": { + "query": { + "sql": "SELECT\n ordered_at AS metric_time__day\n , SUM(order_total) AS order_total\nFROM semantic_layer.orders orders_src_1\nGROUP BY\n ordered_at", + "status": "SUCCESSFUL", + "error": null, + "arrowResult": "arrow-byte-data" + } + } +} +""" + +def to_arrow_table(byte_string: str) -> pa.Table: + """Get a raw base64 string and convert to an Arrow Table.""" + with pa.ipc.open_stream(base64.b64decode(byte_string)) as reader: + return pa.Table.from_batches(reader, reader.schema) + + +arrow_table = to_arrow_table(gql_response.json()["data"]["query"]["arrowResult"]) + +# Perform whatever functionality is available, like convert to a pandas table. +print(arrow_table.to_pandas()) +""" +order_total ordered_at + 3 2023-08-07 + 112 2023-08-08 + 12 2023-08-09 + 5123 2023-08-10 +""" ``` -### Create Query examples +### Additional Create Query examples The following section provides query examples for the GraphQL API, such as how to query metrics, dimensions, where filters, and more. @@ -282,7 +405,7 @@ mutation { createQuery( environmentId: BigInt! metrics: [{name: "order_total"}] - groupBy: [{name: "metric_time", grain: "month"}] + groupBy: [{name: "metric_time", grain: MONTH}] ) { queryId } @@ -298,7 +421,7 @@ mutation { createQuery( environmentId: BigInt! metrics: [{name: "food_order_amount"}, {name: "order_gross_profit"}] - groupBy: [{name: "metric_time, grain: "month"}, {name: "customer__customer_type"}] + groupBy: [{name: "metric_time", grain: MONTH}, {name: "customer__customer_type"}] ) { queryId } @@ -320,7 +443,7 @@ mutation { createQuery( environmentId: BigInt! metrics:[{name: "order_total"}] - groupBy:[{name: "customer__customer_type"}, {name: "metric_time", grain: "month"}] + groupBy:[{name: "customer__customer_type"}, {name: "metric_time", grain: MONTH}] where:[{sql: "{{ Dimension('customer__customer_type') }} = 'new'"}, {sql:"{{ Dimension('metric_time').grain('month') }} > '2022-10-01'"}] ) { queryId @@ -335,8 +458,8 @@ mutation { createQuery( environmentId: BigInt! metrics: [{name: "order_total"}] - groupBy: [{name: "metric_time", grain: "month"}] - orderBy: [{metric: {name: "order_total"}}, {groupBy: {name: "metric_time", grain: "month"}, descending:true}] + groupBy: [{name: "metric_time", grain: MONTH}] + orderBy: [{metric: {name: "order_total"}}, {groupBy: {name: "metric_time", grain: MONTH}, descending:true}] ) { queryId } @@ -351,7 +474,7 @@ mutation { createQuery( environmentId: BigInt!
metrics: [{name:"food_order_amount"}, {name: "order_gross_profit"}] - groupBy: [{name:"metric_time, grain: "month"}, {name: "customer__customer_type"}] + groupBy: [{name:"metric_time, grain: MONTH}, {name: "customer__customer_type"}] limit: 10 ) { queryId @@ -359,7 +482,7 @@ mutation { } ``` -**Query with Explain** +**Query with just compiling SQL** This takes the same inputs as the `createQuery` mutation. @@ -368,95 +491,9 @@ mutation { compileSql( environmentId: BigInt! metrics: [{name:"food_order_amount"} {name:"order_gross_profit"}] - groupBy: [{name:"metric_time, grain:"month"}, {name:"customer__customer_type"}] + groupBy: [{name:"metric_time, grain: MONTH}, {name:"customer__customer_type"}] ) { sql } } ``` - -### Output format and pagination - -**Output format** - -By default, the output is in Arrow format. You can switch to JSON format using the following parameter. However, due to performance limitations, we recommend using the JSON parameter for testing and validation. The JSON received is a base64 encoded string. To access it, you can decode it using a base64 decoder. The JSON is created from pandas, which means you can change it back to a dataframe using `pandas.read_json(json, orient="table")`. Or you can work with the data directly using `json["data"]`, and find the table schema using `json["schema"]["fields"]`. Alternatively, you can pass `encoded:false` to the jsonResult field to get a raw JSON string directly. - - -```graphql -{ - query(environmentId: BigInt!, queryId: Int!, pageNum: Int! = 1) { - sql - status - error - totalPages - arrowResult - jsonResult(orient: PandasJsonOrient! = TABLE, encoded: Boolean! = true) - } -} -``` - -The results default to the table but you can change it to any [pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_json.html) supported value. - -**Pagination** - -By default, we return 1024 rows per page. If your result set exceeds this, you need to increase the page number using the `pageNum` option. - -### Run a Python query - -The `arrowResult` in the GraphQL query response is a byte dump, which isn't visually useful. You can convert this byte data into an Arrow table using any Arrow-supported language. Refer to the following Python example explaining how to query and decode the arrow result: - - -```python -import base64 -import pyarrow as pa - -headers = {"Authorization":"Bearer "} -query_result_request = """ -{ - query(environmentId: 70, queryId: "12345678") { - sql - status - error - arrowResult - } -} -""" - -gql_response = requests.post( - "https://semantic-layer.cloud.getdbt.com/api/graphql", - json={"query": query_result_request}, - headers=headers, -) - -""" -gql_response.json() => -{ - "data": { - "query": { - "sql": "SELECT\n ordered_at AS metric_time__day\n , SUM(order_total) AS order_total\nFROM semantic_layer.orders orders_src_1\nGROUP BY\n ordered_at", - "status": "SUCCESSFUL", - "error": null, - "arrowResult": "arrow-byte-data" - } - } -} -""" - -def to_arrow_table(byte_string: str) -> pa.Table: - """Get a raw base64 string and convert to an Arrow Table.""" - with pa.ipc.open_stream(base64.b64decode(res)) as reader: - return pa.Table.from_batches(reader, reader.schema) - - -arrow_table = to_arrow_table(gql_response.json()["data"]["query"]["arrowResult"]) - -# Perform whatever functionality is available, like convert to a pandas table. 
-print(arrow_table.to_pandas()) -""" -order_total ordered_at - 3 2023-08-07 - 112 2023-08-08 - 12 2023-08-09 - 5123 2023-08-10 -""" -``` diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md index 931666dd10c..aba309566f8 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md +++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md @@ -352,6 +352,8 @@ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'], ## FAQs + + - **Why do some dimensions use different syntax, like `metric_time` versus `Dimension('metric_time')`?**
When you select a dimension on its own, such as `metric_time`, you can use the shorthand method which doesn't need the "Dimension" syntax. However, when you perform operations on the dimension, such as adding granularity, the object syntax `Dimension('metric_time')` is required. diff --git a/website/docs/docs/dbt-support.md b/website/docs/docs/dbt-support.md index 513d5fff588..40968b9d763 100644 --- a/website/docs/docs/dbt-support.md +++ b/website/docs/docs/dbt-support.md @@ -5,34 +5,62 @@ pagination_next: null pagination_prev: null --- +Support for dbt is available to all users through the following channels: + +- Dedicated dbt Support team (dbt Cloud users). +- [The Community Forum](https://discourse.getdbt.com/). +- [dbt Community slack](https://www.getdbt.com/community/join-the-community/). + ## dbt Core support If you're developing on the command line (CLI) and have questions or need some help — reach out to the helpful dbt community through [the Community Forum](https://discourse.getdbt.com/) or [dbt Community slack](https://www.getdbt.com/community/join-the-community/). ## dbt Cloud support -We want to help you work through implementing and utilizing dbt Cloud at your organization. Have a question you can't find an answer to in [our docs](https://docs.getdbt.com/) or [the Community Forum](https://discourse.getdbt.com/)? Our Support team is here to `dbt help` you! -Check out our guide on [getting help](/community/resources/getting-help) - half of the problem is often knowing where to look... and how to ask good questions! +The global dbt Support team is available to dbt Cloud customers by email or in-product live chat. We want to help you work through implementing and utilizing dbt Cloud at your organization. Have a question you can't find an answer to in [our docs](https://docs.getdbt.com/) or [the Community Forum](https://discourse.getdbt.com/)? Our Support team is here to `dbt help` you! + +- **Enterprise plans** — Priority [support](#severity-level-for-enterprise-support), options for custom support coverage hours, implementation assistance, dedicated management, and dbt Labs security reviews depending on price point. +- **Developer and Team plans** — 24x5 support (no service level agreement (SLA); [contact Sales](https://www.getdbt.com/pricing/) for Enterprise plan inquiries). +- **Support team help** — Assistance with dbt Cloud questions, like project setup, login issues, understanding errors, setting up private packages, linking dbt to a new GitHub account, and so on. +- **Resource guide** — Check the [guide](/community/resources/getting-help) for effective help-seeking strategies. + +
+Example of common support questions +Types of dbt Cloud-related questions our Support team can assist you with, regardless of your dbt Cloud plan:

+How do I...
+ - set up a dbt Cloud project?
+ - set up a private package in dbt Cloud?
+ - configure custom branches on git repos?
+ - link dbt to a new GitHub account?

+Help! I can't...
+ - log in.
+ - access logs.
+ - update user groups.

+I need help understanding...
+ - why this run failed.
+ - why I am getting this error message in dbt Cloud.
+ - why my CI jobs are not kicking off as expected.
+
-Types of dbt Cloud-related questions our Support team can assist you with, regardless of your dbt Cloud plan: -- **How do I...** - - set up a dbt Cloud project? - - set up a private package in dbt Cloud? - - configure custom branches on git repos? - - link dbt to a new github account? -- **Help! I can't...** - - log in. - - access logs. - - update user groups. -- **I need help understanding...** - - why this run failed. - - why I am getting this error message in dbt Cloud. - - why my CI jobs are not kicking off as expected. + +## dbt Cloud Enterprise accounts -### dbt Cloud Enterprise accounts +Enterprise accounts also receive basic assistance with dbt project troubleshooting, including help with errors and issues in macros, models, and dbt Labs' packages. +For strategic advice, expansion, and project setup, consult your Solutions Architect and Sales Director. -For customers on a dbt Cloud Enterprise plan, we **also** offer basic assistance in troubleshooting issues with your dbt project. +For customers on a dbt Cloud Enterprise plan, we **also** offer basic assistance in troubleshooting issues with your dbt project: - **Something isn't working the way I would expect it to...** - in a macro I created... - in an incremental model I'm building... @@ -50,5 +78,20 @@ Types of questions you should ask your Solutions Architect and Sales Director: - Here is our data road map for the next year - can we talk through how dbt fits into it and what features we may not be utilizing that can help us achieve our goals? - It is time for our contract renewal, what options do I have? +### Severity level for Enterprise support + +Support tickets are assigned a severity level based on the impact of the issue on your business. dbt Labs assigns the severity level, which determines the priority of support you receive. For specific ticket response time or other questions that relate to your Enterprise account’s SLA, please refer to your Enterprise contract. + +| Severity Level | Description | | -------------- | ----------- | | Severity Level 1 | Any Error which makes the use or continued use of the Subscription or material features impossible; Subscription is not operational, with no alternative available. | | Severity Level 2 | Feature failure, without a workaround, but Subscription is operational. | | Severity Level 3 | Feature failure, but a workaround exists. | | Severity Level 4 | Error with low-to-no impact on Client’s access to or use of the Subscription, or Client has a general question or feature enhancement request. | + +## External help -When you need help writing SQL, reviewing the overall performance of your project, or want someone to actually help build your dbt project, check out our list of [dbt Preferred Consulting Providers](https://www.getdbt.com/ecosystem/) or our [Services](https://www.getdbt.com/dbt-labs/services/) page! +If you need help writing SQL, reviewing the overall performance of your project, or want someone to help build your dbt project, refer to the following pages: +- List of [dbt Preferred Consulting Providers](https://www.getdbt.com/ecosystem/). +- dbt Labs' [Services](https://www.getdbt.com/dbt-labs/services/).
diff --git a/website/docs/docs/dbt-versions/core-upgrade/00-upgrading-to-v1.7.md b/website/docs/docs/dbt-versions/core-upgrade/00-upgrading-to-v1.7.md index 9ebd3c64cf3..af098860e6f 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/00-upgrading-to-v1.7.md +++ b/website/docs/docs/dbt-versions/core-upgrade/00-upgrading-to-v1.7.md @@ -12,7 +12,7 @@ import UpgradeMove from '/snippets/_upgrade-move.md'; ## Resources - [Changelog](https://github.com/dbt-labs/dbt-core/blob/8aaed0e29f9560bc53d9d3e88325a9597318e375/CHANGELOG.md) -- [CLI Installation guide](/docs/core/installation) +- [CLI Installation guide](/docs/core/installation-overview) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) - [Release schedule](https://github.com/dbt-labs/dbt-core/issues/8260) @@ -32,6 +32,8 @@ This is a relatively small behavior change, but worth calling out in case you no - Don't add a `freshness:` block. - Explicitly set `freshness: null` +Beginning with v1.7, running [`dbt deps`](/reference/commands/deps) creates or updates the `package-lock.yml` file in the _project_root_ where `packages.yml` is recorded. The `package-lock.yml` file contains a record of all packages installed and, if subsequent `dbt deps` runs contain no updated packages in `dependencies.yml` or `packages.yml`, dbt-core installs from `package-lock.yml`. + ## New and changed features and functionality - [`dbt docs generate`](/reference/commands/cmd-docs) now supports `--select` to generate [catalog metadata](/reference/artifacts/catalog-json) for a subset of your project. Currently available for Snowflake and Postgres only, but other adapters are coming soon. diff --git a/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md b/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md index d36cc544814..36146246d3a 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md +++ b/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md @@ -17,7 +17,7 @@ dbt Core v1.6 has three significant areas of focus: ## Resources - [Changelog](https://github.com/dbt-labs/dbt-core/blob/1.6.latest/CHANGELOG.md) -- [CLI Installation guide](/docs/core/installation) +- [CLI Installation guide](/docs/core/installation-overview) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) - [Release schedule](https://github.com/dbt-labs/dbt-core/issues/7481) diff --git a/website/docs/docs/dbt-versions/core-upgrade/02-upgrading-to-v1.5.md b/website/docs/docs/dbt-versions/core-upgrade/02-upgrading-to-v1.5.md index dded8a690fe..e739caa477a 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/02-upgrading-to-v1.5.md +++ b/website/docs/docs/dbt-versions/core-upgrade/02-upgrading-to-v1.5.md @@ -16,7 +16,7 @@ dbt Core v1.5 is a feature release, with two significant additions: ## Resources - [Changelog](https://github.com/dbt-labs/dbt-core/blob/1.5.latest/CHANGELOG.md) -- [CLI Installation guide](/docs/core/installation) +- [CLI Installation guide](/docs/core/installation-overview) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) - [Release schedule](https://github.com/dbt-labs/dbt-core/issues/6715) diff --git a/website/docs/docs/dbt-versions/core-upgrade/04-upgrading-to-v1.4.md b/website/docs/docs/dbt-versions/core-upgrade/04-upgrading-to-v1.4.md index 6c6d96b2326..a946bdf369b 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/04-upgrading-to-v1.4.md +++ b/website/docs/docs/dbt-versions/core-upgrade/04-upgrading-to-v1.4.md @@ -12,7 +12,7 @@ import UpgradeMove
from '/snippets/_upgrade-move.md'; ### Resources - [Changelog](https://github.com/dbt-labs/dbt-core/blob/1.4.latest/CHANGELOG.md) -- [CLI Installation guide](/docs/core/installation) +- [CLI Installation guide](/docs/core/installation-overview) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) **Final release:** January 25, 2023 diff --git a/website/docs/docs/dbt-versions/core-upgrade/05-upgrading-to-v1.3.md b/website/docs/docs/dbt-versions/core-upgrade/05-upgrading-to-v1.3.md index f66d9bb9706..d9d97f17dc5 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/05-upgrading-to-v1.3.md +++ b/website/docs/docs/dbt-versions/core-upgrade/05-upgrading-to-v1.3.md @@ -12,7 +12,7 @@ import UpgradeMove from '/snippets/_upgrade-move.md'; ### Resources - [Changelog](https://github.com/dbt-labs/dbt-core/blob/1.3.latest/CHANGELOG.md) -- [CLI Installation guide](/docs/core/installation) +- [CLI Installation guide](/docs/core/installation-overview) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) ## What to know before upgrading diff --git a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.2.md b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.2.md index 16825ff4e2b..72a3e0c82ad 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.2.md +++ b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.2.md @@ -12,7 +12,7 @@ import UpgradeMove from '/snippets/_upgrade-move.md'; ### Resources - [Changelog](https://github.com/dbt-labs/dbt-core/blob/1.2.latest/CHANGELOG.md) -- [CLI Installation guide](/docs/core/installation) +- [CLI Installation guide](/docs/core/installation-overview) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) ## What to know before upgrading diff --git a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md index 403264a46e6..12f0f42354a 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md +++ b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md @@ -12,7 +12,7 @@ import UpgradeMove from '/snippets/_upgrade-move.md'; ### Resources - [Changelog](https://github.com/dbt-labs/dbt-core/blob/1.1.latest/CHANGELOG.md) -- [CLI Installation guide](/docs/core/installation) +- [CLI Installation guide](/docs/core/installation-overview) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) ## What to know before upgrading diff --git a/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md b/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md index 3f45e44076c..6e437638ef6 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md +++ b/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md @@ -13,7 +13,7 @@ import UpgradeMove from '/snippets/_upgrade-move.md'; - [Discourse](https://discourse.getdbt.com/t/3180) - [Changelog](https://github.com/dbt-labs/dbt-core/blob/1.0.latest/CHANGELOG.md) -- [CLI Installation guide](/docs/core/installation) +- [CLI Installation guide](/docs/core/installation-overview) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) ## What to know before upgrading @@ -45,7 +45,7 @@ Global project macros have been reorganized, and some old unused macros have bee ### Installation - [Installation docs](/docs/supported-data-platforms) reflects adapter-specific installations -- `pip install dbt` is no longer supported, and will raise an explicit error. 
Install the specific adapter plugin you need as `pip install dbt-`. +- `python -m pip install dbt` is no longer supported, and will raise an explicit error. Install the specific adapter plugin you need as `python -m pip install dbt-`. - `brew install dbt` is no longer supported. Install the specific adapter plugin you need (among Postgres, Redshift, Snowflake, or BigQuery) as `brew install dbt-`. - Removed official support for python 3.6, which is reaching end of life on December 23, 2021 diff --git a/website/docs/docs/dbt-versions/core-versions.md b/website/docs/docs/dbt-versions/core-versions.md index 2467f3c946b..3ebf988c136 100644 --- a/website/docs/docs/dbt-versions/core-versions.md +++ b/website/docs/docs/dbt-versions/core-versions.md @@ -18,7 +18,7 @@ dbt Labs provides different support levels for different versions, which may inc ### Further reading - To learn how you can use dbt Core versions in dbt Cloud, see [Choosing a dbt Core version](/docs/dbt-versions/upgrade-core-in-cloud). -- To learn about installing dbt Core, see "[How to install dbt Core](/docs/core/installation)." +- To learn about installing dbt Core, see "[How to install dbt Core](/docs/core/installation-overview)." - To restrict your project to only work with a range of dbt Core versions, or use the currently running dbt Core version, see [`require-dbt-version`](/reference/project-configs/require-dbt-version) and [`dbt_version`](/reference/dbt-jinja-functions/dbt_version). ## Version support prior to v1.0 @@ -29,7 +29,7 @@ All dbt Core versions released prior to 1.0 and their version-specific documenta All dbt Core minor versions that have reached end-of-life (EOL) will have no new patch releases. This means they will no longer receive any fixes, including for known bugs that have been identified. Fixes for those bugs will instead be made in newer minor versions that are still under active support. -We recommend upgrading to a newer version in [dbt Cloud](/docs/dbt-versions/upgrade-core-in-cloud) or [dbt Core](/docs/core/installation#upgrading-dbt-core) to continue receiving support. +We recommend upgrading to a newer version in [dbt Cloud](/docs/dbt-versions/upgrade-core-in-cloud) or [dbt Core](/docs/core/installation-overview#upgrading-dbt-core) to continue receiving support. All dbt Core v1.0 and later are available in dbt Cloud until further notice. In the future, we intend to align dbt Cloud availability with dbt Core ongoing support. You will receive plenty of advance notice before any changes take place. @@ -56,7 +56,7 @@ After a minor version reaches the end of its critical support period, one year a ### Future versions -We aim to release a new minor "feature" every 3 months. _This is an indicative timeline ONLY._ For the latest information about upcoming releases, including their planned release dates and which features and fixes might be included in each, always consult the [`dbt-core` repository milestones](https://github.com/dbt-labs/dbt-core/milestones). +For the latest information about upcoming releases, including planned release dates and which features and fixes might be included, consult the [`dbt-core` repository milestones](https://github.com/dbt-labs/dbt-core/milestones) and [product roadmaps](https://github.com/dbt-labs/dbt-core/tree/main/docs/roadmap). 
## Best practices for upgrading diff --git a/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/explorer-updates-rn.md b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/explorer-updates-rn.md new file mode 100644 index 00000000000..8b829311d81 --- /dev/null +++ b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/explorer-updates-rn.md @@ -0,0 +1,33 @@ +--- +title: "Enhancement: New features and UI changes to dbt Explorer" +description: "November 2023: New features and UI changes to dbt Explorer, including a new filter panel, improved lineage graph, and detailed resource information." +sidebar_label: "Enhancement: New features and UI changes to dbt Explorer" +sidebar_position: 08 +tags: [Nov-2023] +date: 2023-11-28 +--- + +dbt Labs is excited to announce the latest features and UI updates to dbt Explorer! + +For more details, refer to [Explore your dbt projects](/docs/collaborate/explore-projects). + +## The project's lineage graph + +- The search bar in the full lineage graph is now more prominent. +- It's easier to navigate across projects using the breadcrumbs. +- The new context menu (right click) makes it easier to focus on a node or to view its lineage. + + + +## Search improvements + +- When searching with keywords, a new side panel UI helps you filter search results by resource type, tag, column, and other key properties (instead of manually defining selectors). +- Search result logic is clearly explained. For instance, indicating whether a resource contains a column name (exact match only). + + + +## Resource details +- Model test result statuses are now displayed on the model details page. +- Column names can now be searched within the list. + + \ No newline at end of file diff --git a/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/job-notifications-rn.md b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/job-notifications-rn.md index 660129513d7..02fe2e037df 100644 --- a/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/job-notifications-rn.md +++ b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/job-notifications-rn.md @@ -4,6 +4,7 @@ description: "November 2023: New quality-of-life improvements for setting up and sidebar_label: "Enhancement: Job notifications" sidebar_position: 10 tags: [Nov-2023] +date: 2023-11-28 --- There are new quality-of-life improvements in dbt Cloud for email and Slack notifications about your jobs: diff --git a/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/microsoft-fabric-support-rn.md b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/microsoft-fabric-support-rn.md new file mode 100644 index 00000000000..b416817f3a0 --- /dev/null +++ b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/microsoft-fabric-support-rn.md @@ -0,0 +1,21 @@ +--- +title: "New: Public Preview of Microsoft Fabric support in dbt Cloud" +description: "November 2023: Public Preview now available for Microsoft Fabric in dbt Cloud" +sidebar_label: "New: Public Preview of Microsoft Fabric support" +sidebar_position: 09 +tags: [Nov-2023] +date: 2023-11-28 +--- + +Public Preview is now available in dbt Cloud for Microsoft Fabric! + +To learn more, refer to [Connect Microsoft Fabric](/docs/cloud/connect-data-platform/connect-microsoft-fabric) and [Microsoft Fabric DWH configurations](/reference/resource-configs/fabric-configs). + +Also, check out the [Quickstart for dbt Cloud and Microsoft Fabric](/guides/microsoft-fabric?step=1). 
The guide walks you through: + +- Loading the Jaffle Shop sample data (provided by dbt Labs) into your Microsoft Fabric warehouse. +- Connecting dbt Cloud to Microsoft Fabric. +- Turning a sample query into a model in your dbt project. A model in dbt is a SELECT statement. +- Adding tests to your models. +- Documenting your models. +- Scheduling a job to run. \ No newline at end of file diff --git a/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/repo-caching.md b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/repo-caching.md new file mode 100644 index 00000000000..7c35991e961 --- /dev/null +++ b/website/docs/docs/dbt-versions/release-notes/02-Nov-2023/repo-caching.md @@ -0,0 +1,14 @@ +--- +title: "New: Support for Git repository caching" +description: "November 2023: dbt Cloud can cache your project's code (as well as other dbt packages) to ensure runs can begin despite an upstream Git provider's outage." +sidebar_label: "New: Support for Git repository caching" +sidebar_position: 07 +tags: [Nov-2023] +date: 2023-11-29 +--- + +Now available for dbt Cloud Enterprise plans is a new option to enable Git repository caching for your job runs. When enabled, dbt Cloud caches your dbt project's Git repository and uses the cached copy instead if there's an outage with the Git provider. This feature improves the reliability and stability of your job runs. + +To learn more, refer to [Repo caching](/docs/deploy/deploy-environments#git-repository-caching). + + \ No newline at end of file diff --git a/website/docs/docs/dbt-versions/release-notes/11-Feb-2023/feb-ide-updates.md b/website/docs/docs/dbt-versions/release-notes/11-Feb-2023/feb-ide-updates.md index d52ad2d4081..64fa2026d04 100644 --- a/website/docs/docs/dbt-versions/release-notes/11-Feb-2023/feb-ide-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/11-Feb-2023/feb-ide-updates.md @@ -13,7 +13,6 @@ Learn more about the [February changes](https://getdbt.slack.com/archives/C03SAH ## New features - Support for custom node colors in the IDE DAG visualization -- Autosave prototype is now available under feature flag. [Contact](mailto:cloud-ide-feedback@dbtlabs.com) the dbt Labs IDE team to try this out - Ref autocomplete includes models from seeds and snapshots - Prevent menus from getting cropped (git controls dropdown, file tree dropdown, build button, editor tab options) - Additional option to access the file menu by right-clicking on the files and folders in the file tree diff --git a/website/docs/docs/deploy/airgapped.md b/website/docs/docs/deploy/airgapped.md deleted file mode 100644 index a08370fef8c..00000000000 --- a/website/docs/docs/deploy/airgapped.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -id: airgapped-deployment -title: Airgapped (Beta) ---- - -:::info Airgapped - -This section provides a high level summary of the airgapped deployment type for dbt Cloud. This deployment type is currently in Beta and may not be supported in the long term. -If you’re interested in learning more about airgapped deployments for dbt Cloud, contact us at sales@getdbt.com. - -::: - -The airgapped deployment is similar to an on-premise installation in that the dbt Cloud instance will live in your network, and is subject to your security procedures, technologies, and controls. An airgapped install allows you to run dbt Cloud without any external network dependencies and is ideal for organizations that have strict rules around installing software from the cloud. - -The installation process for airgapped is a bit different. 
Instead of downloading and installing images during installation time, you will download all of the necessary configuration and Docker images before starting the installation process, and manage uploading these images yourself. This means that you can remove all external network dependencies and run this application in a very secure environment. - -For more information about the dbt Cloud Airgapped deployment see the below. - -- [Customer Managed Network Architecture](/docs/cloud/about-cloud/architecture) diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md index 6114ed1ca14..149a6951fdc 100644 --- a/website/docs/docs/deploy/ci-jobs.md +++ b/website/docs/docs/deploy/ci-jobs.md @@ -15,7 +15,7 @@ dbt Labs recommends that you create your CI job in a dedicated dbt Cloud [deploy - You have a dbt Cloud account. - For the [Concurrent CI checks](/docs/deploy/continuous-integration#concurrent-ci-checks) and [Smart cancellation of stale builds](/docs/deploy/continuous-integration#smart-cancellation) features, your dbt Cloud account must be on the [Team or Enterprise plan](https://www.getdbt.com/pricing/). - You must be connected using dbt Cloud’s native Git integration with [GitHub](/docs/cloud/git/connect-github), [GitLab](/docs/cloud/git/connect-gitlab), or [Azure DevOps](/docs/cloud/git/connect-azure-devops). - - If you’re using GitLab, you must use a paid or self-hosted account which includes support for GitLab webhooks. + - With GitLab, you need a paid or self-hosted account which includes support for GitLab webhooks and [project access tokens](https://docs.gitlab.com/ee/user/project/settings/project_access_tokens.html). With GitLab Free, merge requests will invoke CI jobs but CI status updates (success or failure of the job) will not be reported back to GitLab. - If you previously configured your dbt project by providing a generic git URL that clones using SSH, you must reconfigure the project to connect through dbt Cloud's native integration. diff --git a/website/docs/docs/deploy/job-commands.md b/website/docs/docs/deploy/job-commands.md index db284c78a05..26fe1931db6 100644 --- a/website/docs/docs/deploy/job-commands.md +++ b/website/docs/docs/deploy/job-commands.md @@ -41,7 +41,7 @@ For every job, you have the option to select the [Generate docs on run](/docs/co ### Command list -You can add or remove as many [dbt commands](/reference/dbt-commands) as necessary for every job. However, you need to have at least one dbt command. There are few commands listed as "dbt Core" in the [dbt Command reference doc](/reference/dbt-commands) page. This means they are meant for use in [dbt Core](/docs/core/about-dbt-core) only and are not available in dbt Cloud. +You can add or remove as many dbt commands as necessary for every job. However, you need to have at least one dbt command. There are a few commands listed as "dbt Cloud CLI" or "dbt Core" in the [dbt Command reference](/reference/dbt-commands) page. This means they are meant for use in dbt Core or the dbt Cloud CLI, and are not available in the dbt Cloud IDE.
:::tip Using selectors diff --git a/website/docs/docs/deploy/source-freshness.md b/website/docs/docs/deploy/source-freshness.md index 78500416c56..2f9fe6bc007 100644 --- a/website/docs/docs/deploy/source-freshness.md +++ b/website/docs/docs/deploy/source-freshness.md @@ -13,7 +13,7 @@ dbt Cloud provides a helpful interface around dbt's [source data freshness](/doc [`dbt build`](reference/commands/build) does _not_ include source freshness checks when building and testing resources in your DAG. Instead, you can use one of these common patterns for defining jobs: - Add `dbt build` to the run step to run models, tests, and so on. - Select the **Generate docs on run** checkbox to automatically [generate project docs](/docs/collaborate/build-and-view-your-docs#set-up-a-documentation-job). -- Select the **Run on source freshness** checkbox to enable [source freshness](#checkbox) as the first to step of the job. +- Select the **Run source freshness** checkbox to enable [source freshness](#checkbox) as the first step of the job. @@ -24,7 +24,7 @@ Review the following options and outcomes: | Options | Outcomes | |--------| ------- | | **Select checkbox ** | The **Run source freshness** checkbox in your **Execution Settings** will run `dbt source freshness` as the first step in your job and won't break subsequent steps if it fails. If you wanted your job dedicated *exclusively* to running freshness checks, you still need to include at least one placeholder step, such as `dbt compile`. | -| **Add as a run step** | Add the `dbt source freshness` command to a job anywhere in your list of run steps. However, if your source data is out of date — this step will "fail', and subsequent steps will not run. dbt Cloud will trigger email notifications (if configured) based on the end state of this step.

You can create a new job to snapshot source freshness.

If you *do not* want your models to run if your source data is out of date, then it could be a good idea to run `dbt source freshness` as the first step in your job. Otherwise, we recommend adding `dbt source freshness` as the last step in the job, or creating a separate job just for this task. | +| **Add as a run step** | Add the `dbt source freshness` command to a job anywhere in your list of run steps. However, if your source data is out of date — this step will "fail", and subsequent steps will not run. dbt Cloud will trigger email notifications (if configured) based on the end state of this step.

You can create a new job to snapshot source freshness.

If you *do not* want your models to run if your source data is out of date, then it could be a good idea to run `dbt source freshness` as the first step in your job. Otherwise, we recommend adding `dbt source freshness` as the last step in the job, or creating a separate job just for this task. | diff --git a/website/docs/docs/introduction.md b/website/docs/docs/introduction.md index 61cda6e1d3e..c575a9ae657 100644 --- a/website/docs/docs/introduction.md +++ b/website/docs/docs/introduction.md @@ -5,6 +5,7 @@ pagination_next: null pagination_prev: null --- + dbt compiles and runs your analytics code against your data platform, enabling you and your team to collaborate on a single source of truth for metrics, insights, and business definitions. This single source of truth, combined with the ability to define tests for your data, reduces errors when logic changes, and alerts you when issues arise. diff --git a/website/docs/docs/running-a-dbt-project/run-your-dbt-projects.md b/website/docs/docs/running-a-dbt-project/run-your-dbt-projects.md index b3b6ffb3e45..f1e631f0d78 100644 --- a/website/docs/docs/running-a-dbt-project/run-your-dbt-projects.md +++ b/website/docs/docs/running-a-dbt-project/run-your-dbt-projects.md @@ -11,9 +11,9 @@ You can run your dbt projects with [dbt Cloud](/docs/cloud/about-cloud/dbt-cloud - Share your [dbt project's documentation](/docs/collaborate/build-and-view-your-docs) with your team. - Integrates with the dbt Cloud IDE, allowing you to run development tasks and environment in the dbt Cloud UI for a seamless experience. - The dbt Cloud CLI to develop and run dbt commands against your dbt Cloud development environment from your local command line. - - For more details, refer to [Develop in the Cloud](/docs/cloud/about-cloud-develop). + - For more details, refer to [Develop dbt](/docs/cloud/about-develop-dbt). -- **dbt Core**: An open source project where you can develop from the [command line](/docs/core/about-dbt-core). +- **dbt Core**: An open source project where you can develop from the [command line](/docs/core/installation-overview). The dbt Cloud CLI and dbt Core are both command line tools that enable you to run dbt commands. The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its [features](/docs/cloud/about-cloud/dbt-cloud-features). diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md index c0c9a30db36..079e2018982 100644 --- a/website/docs/docs/supported-data-platforms.md +++ b/website/docs/docs/supported-data-platforms.md @@ -41,6 +41,3 @@ The following are **Trusted adapters** ✓ you can connect to in dbt Core: import AdaptersTrusted from '/snippets/_adapters-trusted.md'; - -
* Install these adapters using dbt Core as they're not currently supported in dbt Cloud.
- diff --git a/website/docs/docs/trusted-adapters.md b/website/docs/docs/trusted-adapters.md index 20d61f69575..7b7af7d0790 100644 --- a/website/docs/docs/trusted-adapters.md +++ b/website/docs/docs/trusted-adapters.md @@ -25,12 +25,12 @@ Refer to the [Build, test, document, and promote adapters](/guides/adapter-creat ### Trusted vs Verified -The Verification program exists to highlight adapters that meets both of the following criteria: +The Verification program exists to highlight adapters that meet both of the following criteria: - the guidelines given in the Trusted program, - formal agreements required for integration with dbt Cloud -For more information on the Verified Adapter program, reach out the [dbt Labs partnerships team](mailto:partnerships@dbtlabs.com) +For more information on the Verified Adapter program, reach out to the [dbt Labs partnerships team](mailto:partnerships@dbtlabs.com) ### Trusted adapters diff --git a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md index 4f4621fa860..be02fedb230 100644 --- a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md +++ b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md @@ -33,6 +33,7 @@ import AvailIntegrations from '/snippets/_sl-partner-links.md'; - {frontMatter.meta.api_name} to learn how to integrate and query your metrics in downstream tools. - [dbt Semantic Layer API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) - [Hex dbt Semantic Layer cells](https://learn.hex.tech/docs/logic-cell-types/transform-cells/dbt-metrics-cells) to set up SQL cells in Hex. +- [Resolve 'Failed ALPN'](/faqs/Troubleshooting/sl-alpn-error) error when connecting to the dbt Semantic Layer. diff --git a/website/docs/docs/use-dbt-semantic-layer/gsheets.md b/website/docs/docs/use-dbt-semantic-layer/gsheets.md index cb9f4014803..9d5a7c105ae 100644 --- a/website/docs/docs/use-dbt-semantic-layer/gsheets.md +++ b/website/docs/docs/use-dbt-semantic-layer/gsheets.md @@ -54,10 +54,9 @@ To use the filter functionality, choose the [dimension](docs/build/dimensions) y - For categorical dimensions, type in the dimension value you want to filter by (no quotes needed) and press enter. - Continue adding additional filters as needed with AND and OR. If it's a time dimension, choose the operator and select from the calendar. - - **Limited Use Policy Disclosure** The dbt Semantic Layer for Sheets' use and transfer to any other app of information received from Google APIs will adhere to [Google API Services User Data Policy](https://developers.google.com/terms/api-services-user-data-policy), including the Limited Use requirements.
- +## FAQs + diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md index 75a853fcbe8..94f8fee007f 100644 --- a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md +++ b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md @@ -14,6 +14,8 @@ The dbt Semantic Layer allows you to define metrics and use various interfaces t + + ## dbt Semantic Layer components The dbt Semantic Layer includes the following components: diff --git a/website/docs/docs/use-dbt-semantic-layer/tableau.md b/website/docs/docs/use-dbt-semantic-layer/tableau.md index 8e6d7d8ed27..a5c1b6edd04 100644 --- a/website/docs/docs/use-dbt-semantic-layer/tableau.md +++ b/website/docs/docs/use-dbt-semantic-layer/tableau.md @@ -12,41 +12,50 @@ The Tableau integration with the dbt Semantic Layer is a [beta feature](/docs/db The Tableau integration allows you to use worksheets to query the Semantic Layer directly and produce your dashboards with trusted data. -This integration provides a live connection to the dbt Semantic Layer through Tableau Desktop. +This integration provides a live connection to the dbt Semantic Layer through Tableau Desktop or Tableau Server. ## Prerequisites - You have [configured the dbt Semantic Layer](/docs/use-dbt-semantic-layer/setup-sl) and are using dbt v1.6 or higher. -- You must have [Tableau Desktop](https://www.tableau.com/en-gb/products/desktop) installed with version 2021.1 or greater - - Note that Tableau Online does not currently support custom connectors natively. -- Log in to Tableau Desktop using either your license or the login details you use for Tableau Server or Tableau Online. +- You must have [Tableau Desktop](https://www.tableau.com/en-gb/products/desktop) version 2021.1 or greater, or Tableau Server. - Note that Tableau Online does not currently support custom connectors natively. If you use Tableau Online, you will only be able to access the connector in Tableau Desktop. +- Log in to Tableau Desktop (with your Online or Server credentials) or to Tableau Server with a valid license. - You need your dbt Cloud host, [Environment ID](/docs/use-dbt-semantic-layer/setup-sl#set-up-dbt-semantic-layer) and [service token](/docs/dbt-cloud-apis/service-tokens) to log in. This account should be set up with the dbt Semantic Layer. - You must have a dbt Cloud Team or Enterprise [account](https://www.getdbt.com/pricing) and multi-tenant [deployment](/docs/cloud/about-cloud/regions-ip-addresses). (Single-Tenant coming soon) ## Installing the Connector 1. Download the GitHub [connector file](https://github.com/dbt-labs/semantic-layer-tableau-connector/releases/download/v1.0.2/dbt_semantic_layer.taco) locally and add it to your default folder: - - Windows: `C:\Users\\[Windows User]\Documents\My Tableau Repository\Connectors` - - Mac: `/Users/[user]/Documents/My Tableau Repository/Connectors` - - Linux: `/opt/tableau/connectors` + +| Operating system | Tableau Desktop | Tableau Server | | ---------------- | -------------- | -------------- | | Windows | `C:\Users\\[Windows User]\Documents\My Tableau Repository\Connectors` | `C:\Program Files\Tableau\Connectors` | | Mac | `/Users/[user]/Documents/My Tableau Repository/Connectors` | Not applicable | | Linux | `/opt/tableau/connectors` | `/opt/tableau/connectors` | + 2.
Install the [JDBC driver](/docs/dbt-cloud-apis/sl-jdbc) to the folder based on your operating system: - Windows: `C:\Program Files\Tableau\Drivers` - - Mac: `~/Library/Tableau/Drivers` + - Mac: `~/Library/Tableau/Drivers` or `/Library/JDBC` or `~/Library/JDBC` - Linux: ` /opt/tableau/tableau_driver/jdbc` -3. Open Tableau Desktop and find the **dbt Semantic Layer by dbt Labs** connector on the left-hand side. -4. Connect with your Host, Environment ID, and service token information that's provided to you in your dbt Cloud Semantic Layer configuration. +3. Open Tableau Desktop or Tableau Server and find the **dbt Semantic Layer by dbt Labs** connector on the left-hand side. You may need to restart these applications for the connector to be available. +4. Connect with your Host, Environment ID, and Service Token information dbt Cloud provides during [Semantic Layer configuration](/docs/use-dbt-semantic-layer/setup-sl#:~:text=After%20saving%20it%2C%20you%27ll%20be%20provided%20with%20the%20connection%20information%20that%20allows%20you%20to%20connect%20to%20downstream%20tools). + - In Tableau Server, the authentication screen may show "User" & "Password" instead, in which case the User is the Environment ID and the password is the Service Token. ## Using the integration -Once you authenticate, the system will direct you to the data source page with all the metrics and dimensions configured in your Semantic Layer. - -- From there, go directly to a worksheet in the bottom left-hand corner. -- Then, you'll find all the metrics and dimensions that are available to query on the left-hand side of your window. +1. **Authentication** — Once you authenticate, the system will direct you to the data source page with all the metrics and dimensions configured in your dbt Semantic Layer. +2. **Access worksheet** — From there, go directly to a worksheet in the bottom left-hand corner. +3. **Access metrics and dimensions** — Then, you'll find all the metrics and dimensions that are available to query on the left side of your window. Visit the [Tableau documentation](https://help.tableau.com/current/pro/desktop/en-us/gettingstarted_overview.htm) to learn more about how to use Tableau worksheets and dashboards. +### Publish from Tableau Desktop to Tableau Server + +- **From Desktop to Server** — Like any Tableau workflow, you can publish your workbook from Tableau Desktop to Tableau Server. For step-by-step instructions, visit Tableau's [publishing guide](https://help.tableau.com/current/pro/desktop/en-us/publish_workbooks_share.htm). + + ## Things to note - All metrics use the "SUM" aggregation type, and this can't be altered. The dbt Semantic Layer controls the aggregation type and it is intentionally fixed. Keep in mind that the underlying aggregation in the dbt Semantic Layer might not be "SUM" (even though "SUM" is Tableau's default). 
@@ -64,10 +73,12 @@ The following Tableau features aren't supported at this time, however, the dbt S - Updating the data source page - Using "Extract" mode to view your data - Unioning Tables -- Writing Custom SQL +- Writing Custom SQL / Initial SQL - Table Extensions -- Cross Database Joins +- Cross-Database Joins - All functions in Analysis --> Create Calculated Field - Filtering on a Date Part time dimension for a Cumulative metric type - Changing your date dimension to use "Week Number" +## FAQs + diff --git a/website/docs/faqs/API/_category_.yaml b/website/docs/faqs/API/_category_.yaml new file mode 100644 index 00000000000..fac67328a7a --- /dev/null +++ b/website/docs/faqs/API/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'API' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: API FAQs +customProps: + description: Frequently asked questions about dbt APIs diff --git a/website/docs/faqs/API/rotate-token.md b/website/docs/faqs/API/rotate-token.md index a880825ea3f..144c834ea8a 100644 --- a/website/docs/faqs/API/rotate-token.md +++ b/website/docs/faqs/API/rotate-token.md @@ -7,6 +7,24 @@ id: rotate-token For security reasons and best practices, you should aim to rotate API keys every so often. +You can rotate your API key automatically with the push of a button in your dbt Cloud environment or manually using the command line. + + + + + +To automatically rotate your API key: + +1. Navigate to the Account settings by clicking the **gear icon** in the top right of your dbt Cloud account. +2. Select **API Access** from the left-hand side. +3. In the **API** pane, click `Rotate`. + + + + + + + 1. Rotate your [User API token](/docs/dbt-cloud-apis/user-tokens) by replacing `YOUR_USER_ID`, `YOUR_CURRENT_TOKEN`, and `YOUR_ACCESS_URL` with your information in the following request. ``` @@ -41,3 +59,7 @@ For example, if your deployment is Virtual Private dbt: ✅ `http://cloud.customizedurl.getdbt.com/`
❌ `http://cloud.getdbt.com/`
+ +
+ +
\ No newline at end of file diff --git a/website/docs/faqs/Accounts/_category_.yaml b/website/docs/faqs/Accounts/_category_.yaml new file mode 100644 index 00000000000..b8ebee5fe2a --- /dev/null +++ b/website/docs/faqs/Accounts/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Accounts' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Account FAQs +customProps: + description: Frequently asked questions about your account in dbt diff --git a/website/docs/faqs/Core/_category_.yaml b/website/docs/faqs/Core/_category_.yaml new file mode 100644 index 00000000000..bac4ad4a655 --- /dev/null +++ b/website/docs/faqs/Core/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'dbt Core' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: 'dbt Core FAQs' +customProps: + description: Frequently asked questions about dbt Core diff --git a/website/docs/faqs/Core/install-pip-best-practices.md b/website/docs/faqs/Core/install-pip-best-practices.md index e36d58296ec..72360a52acc 100644 --- a/website/docs/faqs/Core/install-pip-best-practices.md +++ b/website/docs/faqs/Core/install-pip-best-practices.md @@ -30,6 +30,6 @@ Before installing dbt, make sure you have the latest versions: ```shell -pip install --upgrade pip wheel setuptools +python -m pip install --upgrade pip wheel setuptools ``` diff --git a/website/docs/faqs/Core/install-pip-os-prereqs.md b/website/docs/faqs/Core/install-pip-os-prereqs.md index 41a4e4ec60e..1eb6205512a 100644 --- a/website/docs/faqs/Core/install-pip-os-prereqs.md +++ b/website/docs/faqs/Core/install-pip-os-prereqs.md @@ -57,7 +57,7 @@ pip install cryptography~=3.4 ``` -#### Windows +### Windows Windows requires Python and git to successfully install and run dbt Core. 
diff --git a/website/docs/faqs/Docs/_category_.yaml b/website/docs/faqs/Docs/_category_.yaml new file mode 100644 index 00000000000..8c7925dcc15 --- /dev/null +++ b/website/docs/faqs/Docs/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'dbt Docs' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: dbt Docs FAQs +customProps: + description: Frequently asked questions about dbt Docs diff --git a/website/docs/faqs/Environments/_category_.yaml b/website/docs/faqs/Environments/_category_.yaml new file mode 100644 index 00000000000..8d252d2c5d3 --- /dev/null +++ b/website/docs/faqs/Environments/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Environments' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: 'Environments FAQs' +customProps: + description: Frequently asked questions about Environments in dbt diff --git a/website/docs/faqs/Git/_category_.yaml b/website/docs/faqs/Git/_category_.yaml new file mode 100644 index 00000000000..0d9e5ee6e91 --- /dev/null +++ b/website/docs/faqs/Git/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Git' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Git FAQs +customProps: + description: Frequently asked questions about Git and dbt diff --git a/website/docs/faqs/Jinja/_category_.yaml b/website/docs/faqs/Jinja/_category_.yaml new file mode 100644 index 00000000000..809ca0bb8eb --- /dev/null +++ b/website/docs/faqs/Jinja/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Jinja' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Jinja FAQs +customProps: + description: Frequently asked questions about Jinja and dbt diff --git a/website/docs/faqs/Models/_category_.yaml b/website/docs/faqs/Models/_category_.yaml new file mode 100644 index 00000000000..7398058db2b --- /dev/null +++ b/website/docs/faqs/Models/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Models' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Models FAQs +customProps: + description: Frequently asked questions about Models in dbt diff --git a/website/docs/faqs/Project/_category_.yaml b/website/docs/faqs/Project/_category_.yaml new file mode 100644 index 00000000000..d2f695773f8 --- /dev/null +++ b/website/docs/faqs/Project/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Projects' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Project FAQs +customProps: + description: Frequently asked questions about projects in dbt diff --git a/website/docs/faqs/Runs/_category_.yaml b/website/docs/faqs/Runs/_category_.yaml new file mode 100644 index 00000000000..5867a0d3710 --- /dev/null +++ b/website/docs/faqs/Runs/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported 
+label: 'Runs' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Runs FAQs +customProps: + description: Frequently asked questions about runs in dbt diff --git a/website/docs/faqs/Seeds/_category_.yaml b/website/docs/faqs/Seeds/_category_.yaml new file mode 100644 index 00000000000..fd2f7d3d925 --- /dev/null +++ b/website/docs/faqs/Seeds/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Seeds' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Seeds FAQs +customProps: + description: Frequently asked questions about seeds in dbt diff --git a/website/docs/faqs/Snapshots/_category_.yaml b/website/docs/faqs/Snapshots/_category_.yaml new file mode 100644 index 00000000000..743b508fefe --- /dev/null +++ b/website/docs/faqs/Snapshots/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Snapshots' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Snapshots FAQs +customProps: + description: Frequently asked questions about snapshots in dbt diff --git a/website/docs/faqs/Tests/_category_.yaml b/website/docs/faqs/Tests/_category_.yaml new file mode 100644 index 00000000000..754b8ec267b --- /dev/null +++ b/website/docs/faqs/Tests/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Tests' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Tests FAQs +customProps: + description: Frequently asked questions about tests in dbt diff --git a/website/docs/faqs/Troubleshooting/_category_.yaml b/website/docs/faqs/Troubleshooting/_category_.yaml new file mode 100644 index 00000000000..14c4b49044d --- /dev/null +++ b/website/docs/faqs/Troubleshooting/_category_.yaml @@ -0,0 +1,10 @@ +# position: 2.5 # float position is supported +label: 'Troubleshooting' +collapsible: true # make the category collapsible +collapsed: true # keep the category collapsed by default +className: red +link: + type: generated-index + title: Troubleshooting FAQs +customProps: + description: Frequently asked questions about troubleshooting dbt diff --git a/website/docs/faqs/Troubleshooting/ip-restrictions.md b/website/docs/faqs/Troubleshooting/ip-restrictions.md new file mode 100644 index 00000000000..9f1aa41c574 --- /dev/null +++ b/website/docs/faqs/Troubleshooting/ip-restrictions.md @@ -0,0 +1,29 @@ +--- +title: "I'm receiving a 403 error 'Forbidden: Access denied' when using service tokens" +description: "All service token traffic is now subject to IP restrictions. To resolve 403 errors, add your third-party integration CIDRs (network addresses) to the allowlist." +sidebar_label: 'Service token 403 error: Forbidden: Access denied' +--- + + +All [service token](/docs/dbt-cloud-apis/service-tokens) traffic is subject to IP restrictions. + +When using a service token, the following 403 response error indicates the IP is not on the allowlist. To resolve this, you should add your third-party integration CIDRs (network addresses) to your allowlist. 
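The example response below includes an `account_access_denied` flag. As a rough sketch of how a client could distinguish this failure mode from an ordinary permissions error (the account ID and token are placeholders, and the accounts endpoint is shown only as one illustrative Administrative API call, assuming the `requests` library):

```python
import requests

# Placeholders, for illustration only.
ACCOUNT_ID = "<account-id>"
SERVICE_TOKEN = "<service-token>"

response = requests.get(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/",
    headers={"Authorization": f"Token {SERVICE_TOKEN}"},
)

if response.status_code == 403:
    data = response.json().get("data") or {}
    # account_access_denied distinguishes an IP-allowlist rejection
    # from a missing-permission error.
    if data.get("account_access_denied"):
        raise RuntimeError(
            "Blocked by IP restrictions: add this integration's CIDRs to the allowlist."
        )
response.raise_for_status()
```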
+
+The following is an example of the 403 response error:
+
+```json
+ {
+   "status": {
+     "code": 403,
+     "is_success": false,
+     "user_message": "Forbidden: Access denied",
+     "developer_message": null
+   },
+   "data": {
+     "account_id": <account_id>,
+     "user_id": <user_id>,
+     "is_service_token": <is_service_token>,
+     "account_access_denied": true
+   }
+ }
+```
diff --git a/website/docs/faqs/Troubleshooting/sl-alpn-error.md new file mode 100644 index 00000000000..f588d690fac --- /dev/null +++ b/website/docs/faqs/Troubleshooting/sl-alpn-error.md @@ -0,0 +1,14 @@
+---
+title: I'm receiving a `Failed ALPN` error when trying to connect to the dbt Semantic Layer.
+description: "To resolve the 'Failed ALPN' error in the dbt Semantic Layer, create an SSL interception exception for the dbt Cloud domain."
+sidebar_label: 'Use SSL exception to resolve `Failed ALPN` error'
+---
+
+If you're receiving a `Failed ALPN` error when trying to connect to the dbt Semantic Layer with one of the various [data integration tools](/docs/use-dbt-semantic-layer/avail-sl-integrations) (such as Tableau, DBeaver, Datagrip, ADBC, or JDBC), it typically happens when you're connecting from a computer behind a corporate VPN or proxy (like Zscaler or Check Point).
+
+The root cause is typically the proxy interfering with the TLS handshake, as the dbt Semantic Layer uses gRPC/HTTP2 for connectivity. To resolve this:
+
+- If your proxy supports gRPC/HTTP2 but isn't configured to allow ALPN, adjust its settings to allow ALPN, or create an exception for the dbt Cloud domain.
+- If your proxy does not support gRPC/HTTP2, add an SSL interception exception for the dbt Cloud domain in your proxy settings.
+
+This should allow the connection to be established without the `Failed ALPN` error.
diff --git a/website/docs/faqs/Warehouse/_category_.yaml b/website/docs/faqs/Warehouse/_category_.yaml new file mode 100644 index 00000000000..4de6e2e7d5e --- /dev/null +++ b/website/docs/faqs/Warehouse/_category_.yaml @@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Warehouse'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+  type: generated-index
+  title: Warehouse FAQs
+customProps:
+  description: Frequently asked questions about warehouses and dbt
diff --git a/website/docs/guides/adapter-creation.md index 8a9145f0258..8bf082b04a0 100644 --- a/website/docs/guides/adapter-creation.md +++ b/website/docs/guides/adapter-creation.md @@ -799,7 +799,7 @@ dbt-tests-adapter
 ```sh
-pip install -r dev_requirements.txt
+python -m pip install -r dev_requirements.txt
 ```
 ### Set up and configure pytest
@@ -1108,7 +1108,7 @@ The following subjects need to be addressed across three pages of this docs site
 | How To... | File to change within `/website/docs/` | Action | Info to Include |
 |----------------------|--------------------------------------------------------------|--------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Connect | `/docs/core/connect-data-platform/{MY-DATA-PLATFORM}-setup.md` | Create | Give all information needed to define a target in `~/.dbt/profiles.yml` and get `dbt debug` to connect to the database successfully. All possible configurations should be mentioned.
| +| Connect | `/docs/core/connect-data-platform/{MY-DATA-PLATFORM}-setup.md` | Create | Give all information needed to define a target in `~/.dbt/profiles.yml` and get `dbt debug` to connect to the database successfully. All possible configurations should be mentioned. | | Configure | `reference/resource-configs/{MY-DATA-PLATFORM}-configs.md` | Create | What options and configuration specific to your data platform do users need to know? e.g. table distribution and indexing options, column_quoting policy, which incremental strategies are supported | | Discover and Install | `docs/supported-data-platforms.md` | Modify | Is it a vendor- or community- supported adapter? How to install Python adapter package? Ideally with pip and PyPI hosted package, but can also use `git+` link to GitHub Repo | | Add link to sidebar | `website/sidebars.js` | Modify | Add the document id to the correct location in the sidebar menu | @@ -1123,6 +1123,14 @@ Below are some recent pull requests made by partners to document their data plat - [SingleStore](https://github.com/dbt-labs/docs.getdbt.com/pull/1044) - [Firebolt](https://github.com/dbt-labs/docs.getdbt.com/pull/941) +Note — Use the following re-usable component to auto-fill the frontmatter content on your new page: + +```markdown +import SetUpPages from '/snippets/_setup-pages-intro.md'; + + +``` + ## Promote a new adapter The most important thing here is recognizing that people are successful in the community when they join, first and foremost, to engage authentically. diff --git a/website/docs/guides/codespace-qs.md b/website/docs/guides/codespace-qs.md index 7712ed8f8e8..b28b0ddaacf 100644 --- a/website/docs/guides/codespace-qs.md +++ b/website/docs/guides/codespace-qs.md @@ -61,7 +61,7 @@ If you'd like to work with a larger selection of Jaffle Shop data, you can gener 1. Install the Python package called [jafgen](https://pypi.org/project/jafgen/). At the terminal's prompt, run: ```shell - /workspaces/test (main) $ pip install jafgen + /workspaces/test (main) $ python -m pip install jafgen ``` 1. When installation is done, run: diff --git a/website/docs/guides/create-new-materializations.md b/website/docs/guides/create-new-materializations.md index 1ad7d202de6..af2732c0c39 100644 --- a/website/docs/guides/create-new-materializations.md +++ b/website/docs/guides/create-new-materializations.md @@ -7,7 +7,6 @@ hoverSnippet: Learn how to create your own materializations. 
# time_to_complete: '30 minutes' commenting out until we test
icon: 'guides'
hide_table_of_contents: true
-tags: ['dbt Core']
level: 'Advanced'
recently_updated: true
---
diff --git a/website/docs/guides/custom-cicd-pipelines.md index 672c6e6dab8..bd6d7617623 100644 --- a/website/docs/guides/custom-cicd-pipelines.md +++ b/website/docs/guides/custom-cicd-pipelines.md @@ -336,7 +336,7 @@ lint-project:
   rules:
     - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH != 'main'
   script:
-    - pip install sqlfluff==0.13.1
+    - python -m pip install sqlfluff==0.13.1
     - sqlfluff lint models --dialect snowflake --rules L019,L020,L021,L022
 # this job calls the dbt Cloud API to run a job
@@ -379,7 +379,7 @@ steps:
     displayName: 'Use Python 3.7'
   - script: |
-      pip install requests
+      python -m pip install requests
     displayName: 'Install python dependencies'
   - script: |
@@ -434,7 +434,7 @@ pipelines:
     - step:
         name: Lint dbt project
         script:
-          - pip install sqlfluff==0.13.1
+          - python -m pip install sqlfluff==0.13.1
           - sqlfluff lint models --dialect snowflake --rules L019,L020,L021,L022
 'main': # override if your default branch doesn't run on a branch named "main"
diff --git a/website/docs/guides/dremio-lakehouse.md new file mode 100644 index 00000000000..378ec857f6a --- /dev/null +++ b/website/docs/guides/dremio-lakehouse.md @@ -0,0 +1,196 @@
+---
+title: Build a data lakehouse with dbt Core and Dremio Cloud
+id: build-dremio-lakehouse
+description: Learn how to build a data lakehouse with dbt Core and Dremio Cloud.
+displayText: Build a data lakehouse with dbt Core and Dremio Cloud
+hoverSnippet: Learn how to build a data lakehouse with dbt Core and Dremio Cloud
+# time_to_complete: '30 minutes' commenting out until we test
+platform: 'dbt-core'
+icon: 'guides'
+hide_table_of_contents: true
+tags: ['Dremio', 'dbt Core']
+level: 'Intermediate'
+recently_updated: true
+---
+## Introduction
+
+This guide will demonstrate how to build a data lakehouse with dbt Core 1.5 or newer and Dremio Cloud. You can simplify and optimize your data infrastructure with dbt's robust transformation framework and Dremio's open data lakehouse. Together, they give companies a strong data and analytics foundation that supports self-service analytics and better business insights, while simplifying operations by removing the need to write complex Extract, Transform, and Load (ETL) pipelines.
+
+### Prerequisites
+
+* You must have a [Dremio Cloud](https://docs.dremio.com/cloud/) account.
+* You must have Python 3 installed.
+* You must have dbt Core v1.5 or newer [installed](/docs/core/installation-overview).
+* You must have the Dremio adapter 1.5.0 or newer [installed and configured](/docs/core/connect-data-platform/dremio-setup) for Dremio Cloud.
+* You must have basic working knowledge of Git and the command line interface (CLI).
+
+## Validate your environment
+
+Validate your environment by running the following commands in your CLI and verifying the results:
+
+```shell
+
+$ python3 --version
+Python 3.11.4 # Must be Python 3
+
+```
+
+```shell
+
+$ dbt --version
+Core:
+  - installed: 1.5.0 # Must be 1.5 or newer
+  - latest: 1.6.3 - Update available!
+
+  Your version of dbt-core is out of date!
+  You can find instructions for upgrading here:
+  https://docs.getdbt.com/docs/installation
+
+Plugins:
+  - dremio: 1.5.0 - Up to date! # Must be 1.5 or newer
+
+```
+
+## Getting started
+
+1. Clone the Dremio dbt Core sample project from the [GitHub repo](https://github.com/dremio-brock/DremioDBTSample/tree/master/dremioSamples).
+
+2. In your integrated development environment (IDE), open the `relation.py` file in the Dremio adapter directory:
+   `$HOME/Library/Python/3.9/lib/python/site-packages/dbt/adapters/dremio/relation.py`
+
+3. Find and update lines 51 and 52 to match the following syntax:
+
+```python
+
+PATTERN = re.compile(r"""((?:[^."']|"[^"]*"|'[^']*')+)""")
+return ".".join(PATTERN.split(identifier)[1::2])
+
+```
+
+The complete function should look like this:
+
+```python
+def quoted_by_component(self, identifier, componentName):
+    if componentName == ComponentName.Schema:
+        PATTERN = re.compile(r"""((?:[^."']|"[^"]*"|'[^']*')+)""")
+        return ".".join(PATTERN.split(identifier)[1::2])
+    else:
+        return self.quoted(identifier)
+
+```
+
+You need to update this pattern because the plugin doesn't otherwise support schema names in Dremio that contain dots and spaces.
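+As a quick sanity check, here's a minimal sketch (plain Python, run outside of dbt) of what the updated pattern does — it splits an identifier on dots while leaving dots and spaces inside quoted segments intact:
+
+```python
+import re
+
+# Matches runs of characters that are either non-delimiters or complete
+# quoted strings, so dots inside quotes are never treated as separators.
+PATTERN = re.compile(r"""((?:[^."']|"[^"]*"|'[^']*')+)""")
+
+identifier = '"my space"."folder.with.dots".my_view'
+
+# re.split with a capturing group interleaves components and delimiters;
+# [1::2] keeps just the captured components.
+print(PATTERN.split(identifier)[1::2])
+# ['"my space"', '"folder.with.dots"', 'my_view']
+```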
+## Build your pipeline
+
+1. Create a `profiles.yml` file in the `$HOME/.dbt/profiles.yml` path and add the following configs:
+
+```yaml
+
+dremioSamples:
+  outputs:
+    cloud_dev:
+      dremio_space: dev
+      dremio_space_folder: no_schema
+      object_storage_path: dev
+      object_storage_source: $scratch
+      pat:
+      cloud_host: api.dremio.cloud
+      cloud_project_id:
+      threads: 1
+      type: dremio
+      use_ssl: true
+      user:
+  target: dev
+
+```
+
+2. Execute the transformation pipeline:
+
+```shell
+
+$ dbt run -t cloud_dev
+
+```
+
+If the above configurations have been implemented, the output will look something like this:
+
+```shell
+
+17:24:16 Running with dbt=1.5.0
+17:24:17 Found 5 models, 0 tests, 0 snapshots, 0 analyses, 348 macros, 0 operations, 0 seed files, 2 sources, 0 exposures, 0 metrics, 0 groups
+17:24:17
+17:24:29 Concurrency: 1 threads (target='cloud_dev')
+17:24:29
+17:24:29 1 of 5 START sql view model Preparation.trips .................................. [RUN]
+17:24:31 1 of 5 OK created sql view model Preparation.trips ............................. [OK in 2.61s]
+17:24:31 2 of 5 START sql view model Preparation.weather ................................ [RUN]
+17:24:34 2 of 5 OK created sql view model Preparation.weather ........................... [OK in 2.15s]
+17:24:34 3 of 5 START sql view model Business.Transportation.nyc_trips .................. [RUN]
+17:24:36 3 of 5 OK created sql view model Business.Transportation.nyc_trips ............. [OK in 2.18s]
+17:24:36 4 of 5 START sql view model Business.Weather.nyc_weather ....................... [RUN]
+17:24:38 4 of 5 OK created sql view model Business.Weather.nyc_weather .................. [OK in 2.09s]
+17:24:38 5 of 5 START sql view model Application.nyc_trips_with_weather ................. [RUN]
+17:24:41 5 of 5 OK created sql view model Application.nyc_trips_with_weather ............ [OK in 2.74s]
+17:24:41
+17:24:41 Finished running 5 view models in 0 hours 0 minutes and 24.03 seconds (24.03s).
+17:24:41
+17:24:41 Completed successfully
+17:24:41
+17:24:41 Done. PASS=5 WARN=0 ERROR=0 SKIP=0 TOTAL=5
+
+```
+
+Now that you have a running environment and a completed job, you can view the data in Dremio and expand your code. This is a snapshot of the project structure in an IDE:
+
+
+
+## About the schema.yml
+
+The `schema.yml` file defines the Dremio sources and models to be used and which data models are in scope. In this guide's sample project, there are two data sources:
+
+1. The `NYC-weather.csv` stored in the **Samples** database and
+2. The `sample_data` from the **Samples** database.
+
+The models correspond to the weather and trip data, respectively, and will be joined for analysis.
+
+The sources can be found by navigating to the **Object Storage** section of the Dremio Cloud UI.
+
+
+
+## About the models
+
+**Preparation** — `preparation_trips.sql` and `preparation_weather.sql` build views on top of the trips and weather data.
+
+**Business** — `business_transportation_nyc_trips.sql` applies some transformation to the `preparation_trips.sql` view, while `business_weather_nyc.sql` applies no transformation to the `preparation_weather.sql` view.
+
+**Application** — `application_nyc_trips_with_weather.sql` joins the outputs of the Business models. This is what your business users will consume.
+
+## The job output
+
+When you run the dbt job, it creates a **dev** space folder that contains all the created data assets. This is what you will see in the Dremio Cloud UI. Spaces in Dremio are a way to organize data assets that map to business units or data products.
+
+
+
+Open the **Application** folder and you will see the output of the simple transformation we did using dbt.
+
+
+
+## Query the data
+
+Now that you have run the job and completed the transformation, it's time to query your data. Click the `nyc_trips_with_weather` view. That will take you to the SQL Runner page. Click **Show SQL Pane** in the upper-right corner of the page.
+
+Run the following query:
+
+```sql
+
+SELECT vendor_id,
+       AVG(tip_amount) AS avg_tip_amount
+FROM dev.application."nyc_trips_with_weather"
+GROUP BY vendor_id
+
+```
+
+
+
+This completes the integration setup, and the data is ready for business consumption.
diff --git a/website/docs/guides/manual-install-qs.md index 61796fe008a..c74d30db51c 100644 --- a/website/docs/guides/manual-install-qs.md +++ b/website/docs/guides/manual-install-qs.md @@ -15,7 +15,7 @@ When you use dbt Core to work with dbt, you will be editing files locally using
 ### Prerequisites
 * To use dbt Core, it's important that you know some basics of the Terminal. In particular, you should understand `cd`, `ls` and `pwd` to navigate through the directory structure of your computer easily.
-* Install dbt Core using the [installation instructions](/docs/core/installation) for your operating system.
+* Install dbt Core using the [installation instructions](/docs/core/installation-overview) for your operating system.
 * Complete [Setting up (in BigQuery)](/guides/bigquery?step=2) and [Loading data (BigQuery)](/guides/bigquery?step=3).
 * [Create a GitHub account](https://github.com/join) if you don't already have one.
diff --git a/website/docs/guides/microsoft-fabric-qs.md new file mode 100644 index 00000000000..1d1e016a6f1 --- /dev/null +++ b/website/docs/guides/microsoft-fabric-qs.md @@ -0,0 +1,317 @@
+---
+title: "Quickstart for dbt Cloud and Microsoft Fabric"
+id: "microsoft-fabric"
+level: 'Beginner'
+icon: 'fabric'
+hide_table_of_contents: true
+tags: ['dbt Cloud','Quickstart']
+recently_updated: true
+---
+## Introduction
+
+In this quickstart guide, you'll learn how to use dbt Cloud with [Microsoft Fabric](https://www.microsoft.com/en-us/microsoft-fabric). It will show you how to:
+
+- Load the Jaffle Shop sample data (provided by dbt Labs) into your Microsoft Fabric warehouse.
+- Connect dbt Cloud to Microsoft Fabric.
+- Turn a sample query into a model in your dbt project. A model in dbt is a SELECT statement.
+- Add tests to your models.
+- Document your models.
+- Schedule a job to run.
+
+:::tip Public preview
+
+A public preview of Microsoft Fabric in dbt Cloud is now available!
+
+:::
+
+### Prerequisites
+- You have a [dbt Cloud](https://www.getdbt.com/signup/) account.
+- You have started the Microsoft Fabric (Preview) trial. For details, refer to [Microsoft Fabric (Preview) trial](https://learn.microsoft.com/en-us/fabric/get-started/fabric-trial) in the Microsoft docs.
+- As a Microsoft admin, you’ve enabled service principal authentication. You must add the service principal to the Microsoft Fabric workspace with either a Member (recommended) or Admin permission set. For details, refer to [Enable service principal authentication](https://learn.microsoft.com/en-us/fabric/admin/metadata-scanning-enable-read-only-apis) in the Microsoft docs. dbt Cloud needs these authentication credentials to connect to Microsoft Fabric.
+
+### Related content
+- [dbt Courses](https://courses.getdbt.com/collections)
+- [About continuous integration jobs](/docs/deploy/continuous-integration)
+- [Deploy jobs](/docs/deploy/deploy-jobs)
+- [Job notifications](/docs/deploy/job-notifications)
+- [Source freshness](/docs/deploy/source-freshness)
+
+## Load data into your Microsoft Fabric warehouse
+
+1. Log in to your [Microsoft Fabric](http://app.fabric.microsoft.com) account.
+2. On the home page, select the **Synapse Data Warehouse** tile.
+
+
+
+3. From **Workspaces** on the left sidebar, navigate to your organization’s workspace. Or, you can create a new workspace; refer to [Create a workspace](https://learn.microsoft.com/en-us/fabric/get-started/create-workspaces) in the Microsoft docs for more details.
+4. Choose your warehouse from the table. Or, you can create a new warehouse; refer to [Create a warehouse](https://learn.microsoft.com/en-us/fabric/data-warehouse/tutorial-create-warehouse) in the Microsoft docs for more details.
+5. Open the SQL editor by selecting **New SQL query** from the top bar.
+6. Copy these statements into the SQL editor to load the Jaffle Shop example data:
+
+    ```sql
+    DROP TABLE dbo.customers;
+
+    CREATE TABLE dbo.customers
+    (
+        [ID] [int],
+        [FIRST_NAME] [varchar](8000),
+        [LAST_NAME] [varchar](8000)
+    );
+
+    COPY INTO [dbo].[customers]
+    FROM 'https://dbtlabsynapsedatalake.blob.core.windows.net/dbt-quickstart-public/jaffle_shop_customers.parquet'
+    WITH (
+        FILE_TYPE = 'PARQUET'
+    );
+
+    DROP TABLE dbo.orders;
+
+    CREATE TABLE dbo.orders
+    (
+        [ID] [int],
+        [USER_ID] [int],
+        -- [ORDER_DATE] [int],
+        [ORDER_DATE] [date],
+        [STATUS] [varchar](8000)
+    );
+
+    COPY INTO [dbo].[orders]
+    FROM 'https://dbtlabsynapsedatalake.blob.core.windows.net/dbt-quickstart-public/jaffle_shop_orders.parquet'
+    WITH (
+        FILE_TYPE = 'PARQUET'
+    );
+
+    DROP TABLE dbo.payments;
+
+    CREATE TABLE dbo.payments
+    (
+        [ID] [int],
+        [ORDERID] [int],
+        [PAYMENTMETHOD] [varchar](8000),
+        [STATUS] [varchar](8000),
+        [AMOUNT] [int],
+        [CREATED] [date]
+    );
+
+    COPY INTO [dbo].[payments]
+    FROM 'https://dbtlabsynapsedatalake.blob.core.windows.net/dbt-quickstart-public/stripe_payments.parquet'
+    WITH (
+        FILE_TYPE = 'PARQUET'
+    );
+    ```
+
+
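+To confirm the loads succeeded, you can optionally run a quick row-count check in the same SQL editor (an optional sketch, not part of the original quickstart steps):
+
+```sql
+SELECT COUNT(*) AS customer_rows FROM dbo.customers;
+SELECT COUNT(*) AS order_rows FROM dbo.orders;
+SELECT COUNT(*) AS payment_rows FROM dbo.payments;
+```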
+## Connect dbt Cloud to Microsoft Fabric
+
+1. Create a new project in dbt Cloud. From **Account settings** (using the gear menu in the top right corner), click **+ New Project**.
+2. Enter a project name and click **Continue**.
+3. Choose **Fabric** as your connection and click **Next**.
+4. In the **Configure your environment** section, enter the **Settings** for your new project:
+    - **Server** — Use the service principal's **host** value for the Fabric test endpoint.
+    - **Port** — 1433 (which is the default).
+    - **Database** — Use the service principal's **database** value for the Fabric test endpoint.
+5. Enter the **Development credentials** for your new project:
+    - **Authentication** — Choose **Service Principal** from the dropdown.
+    - **Tenant ID** — Use the service principal’s **Directory (tenant) ID** as the value.
+    - **Client ID** — Use the service principal’s **Application (client) ID** as the value.
+    - **Client secret** — Use the service principal’s **client secret** (not the **client secret ID**) as the value.
+6. Click **Test connection**. This verifies that dbt Cloud can access your Microsoft Fabric account.
+7. Click **Next** when the test succeeds. If it fails, you might need to check your Microsoft service principal.
+
+## Set up a dbt Cloud managed repository
+
+
+## Initialize your dbt project and start developing
+Now that you have a repository configured, you can initialize your project and start development in dbt Cloud:
+
+1. Click **Start developing in the IDE**. It might take a few minutes for your project to spin up for the first time as it establishes your git connection, clones your repo, and tests the connection to the warehouse.
+2. Above the file tree to the left, click **Initialize dbt project**. This builds out your folder structure with example models.
+3. Make your initial commit by clicking **Commit and sync**. Use the commit message `initial commit` and click **Commit**. This creates the first commit to your managed repo and allows you to open a branch where you can add new dbt code.
+4. You can now directly query data from your warehouse and execute `dbt run`. You can try this out now:
+    - In the command line bar at the bottom, enter `dbt run` and click **Enter**. You should see a `dbt run succeeded` message.
+
+## Build your first model
+1. Under **Version Control** on the left, click **Create branch**. You can name it `add-customers-model`. You need to create a new branch since the main branch is set to read-only mode.
+1. Click the **...** next to the `models` directory, then select **Create file**.
+1. Name the file `customers.sql`, then click **Create**.
+1. Copy the following query into the file and click **Save**.
+
+
+
+    ```sql
+    with customers as (
+
+        select
+            ID as customer_id,
+            FIRST_NAME as first_name,
+            LAST_NAME as last_name
+
+        from dbo.customers
+    ),
+
+    orders as (
+
+        select
+            ID as order_id,
+            USER_ID as customer_id,
+            ORDER_DATE as order_date,
+            STATUS as status
+
+        from dbo.orders
+    ),
+
+    customer_orders as (
+
+        select
+            customer_id,
+
+            min(order_date) as first_order_date,
+            max(order_date) as most_recent_order_date,
+            count(order_id) as number_of_orders
+
+        from orders
+
+        group by customer_id
+    ),
+
+    final as (
+
+        select
+            customers.customer_id,
+            customers.first_name,
+            customers.last_name,
+            customer_orders.first_order_date,
+            customer_orders.most_recent_order_date,
+            coalesce(customer_orders.number_of_orders, 0) as number_of_orders
+
+        from customers
+
+        left join customer_orders on customers.customer_id = customer_orders.customer_id
+    )
+
+    select * from final
+    ```
+
+
+1. Enter `dbt run` in the command prompt at the bottom of the screen. You should get a successful run and see the three models.
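+Once the run completes, you can optionally verify the new model from the Fabric SQL editor. This is a sketch — the schema name depends on the development credentials you set earlier, so substitute your own:
+
+```sql
+SELECT TOP 10 * FROM <your_schema>.customers;
+```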
+ +Later, you can connect your business intelligence (BI) tools to these views and tables so they only read cleaned up data rather than raw data in your BI tool. + +#### FAQs + + + + + + + +## Change the way your model is materialized + + + +## Delete the example models + + + +## Build models on top of other models + + + +1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in our original query. +2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in our original query. + + + + ```sql + select + ID as customer_id, + FIRST_NAME as first_name, + LAST_NAME as last_name + + from dbo.customers + ``` + + + + + + ```sql + select + ID as order_id, + USER_ID as customer_id, + ORDER_DATE as order_date, + STATUS as status + + from dbo.orders + ``` + + + +3. Edit the SQL in your `models/customers.sql` file as follows: + + + + ```sql + with customers as ( + + select * from {{ ref('stg_customers') }} + + ), + + orders as ( + + select * from {{ ref('stg_orders') }} + + ), + + customer_orders as ( + + select + customer_id, + + min(order_date) as first_order_date, + max(order_date) as most_recent_order_date, + count(order_id) as number_of_orders + + from orders + + group by customer_id + + ), + + final as ( + + select + customers.customer_id, + customers.first_name, + customers.last_name, + customer_orders.first_order_date, + customer_orders.most_recent_order_date, + coalesce(customer_orders.number_of_orders, 0) as number_of_orders + + from customers + + left join customer_orders on customers.customer_id = customer_orders.customer_id + + ) + + select * from final + + ``` + + + +4. Execute `dbt run`. + + This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies. + +#### FAQs {#faq-2} + + + + + + + + \ No newline at end of file diff --git a/website/docs/guides/redshift-qs.md b/website/docs/guides/redshift-qs.md index 9296e6c6568..890be27e50a 100644 --- a/website/docs/guides/redshift-qs.md +++ b/website/docs/guides/redshift-qs.md @@ -57,7 +57,7 @@ You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamen -7. You might be asked to Configure account. For the purpose of this sandbox environment, we recommend selecting “Configure account”. +7. You might be asked to Configure account. For this sandbox environment, we recommend selecting “Configure account”. 8. Select your cluster from the list. In the **Connect to** popup, fill out the credentials from the output of the stack: - **Authentication** — Use the default which is **Database user name and password** (NOTE: IAM authentication is not supported in dbt Cloud). @@ -82,8 +82,7 @@ Now we are going to load our sample data into the S3 bucket that our Cloudformat 2. Now we are going to use the S3 bucket that you created with CloudFormation and upload the files. Go to the search bar at the top and type in `S3` and click on S3. There will be sample data in the bucket already, feel free to ignore it or use it for other modeling exploration. The bucket will be prefixed with `dbt-data-lake`. - - + 3. Click on the `name of the bucket` S3 bucket. If you have multiple S3 buckets, this will be the bucket that was listed under “Workshopbucket” on the Outputs page. 
diff --git a/website/docs/guides/set-up-ci.md b/website/docs/guides/set-up-ci.md index 83362094ec6..89d7c5a14fa 100644 --- a/website/docs/guides/set-up-ci.md +++ b/website/docs/guides/set-up-ci.md @@ -167,7 +167,7 @@ jobs: with: python-version: "3.9" - name: Install SQLFluff - run: "pip install sqlfluff" + run: "python -m pip install sqlfluff" - name: Lint project run: "sqlfluff lint models --dialect snowflake" @@ -204,7 +204,7 @@ lint-project: rules: - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH != 'main' script: - - pip install sqlfluff + - python -m pip install sqlfluff - sqlfluff lint models --dialect snowflake ``` @@ -235,7 +235,7 @@ pipelines: - step: name: Lint dbt project script: - - pip install sqlfluff==0.13.1 + - python -m pip install sqlfluff==0.13.1 - sqlfluff lint models --dialect snowflake --rules L019,L020,L021,L022 'main': # override if your default branch doesn't run on a branch named "main" diff --git a/website/docs/guides/sl-migration.md b/website/docs/guides/sl-migration.md index 0cfde742af2..8ede40a6a2d 100644 --- a/website/docs/guides/sl-migration.md +++ b/website/docs/guides/sl-migration.md @@ -25,10 +25,10 @@ dbt Labs recommends completing these steps in a local dev environment (such as t 1. Create new Semantic Model configs as YAML files in your dbt project.* 1. Upgrade the metrics configs in your project to the new spec.* 1. Delete your old metrics file or remove the `.yml` file extension so they're ignored at parse time. Remove the `dbt-metrics` package from your project. Remove any macros that reference `dbt-metrics`, like `metrics.calculate()`. Make sure that any packages you’re using don't have references to the old metrics spec. -1. Install the CLI with `pip install "dbt-metricflow[your_adapter_name]"`. For example: +1. Install the CLI with `python -m pip install "dbt-metricflow[your_adapter_name]"`. For example: ```bash - pip install "dbt-metricflow[snowflake]" + python -m pip install "dbt-metricflow[snowflake]" ``` **Note** - The MetricFlow CLI is not available in the IDE at this time. Support is coming soon. @@ -91,13 +91,11 @@ At this point, both the new semantic layer and the old semantic layer will be ru Now that your Semantic Layer is set up, you will need to update any downstream integrations that used the legacy Semantic Layer. -### Migration guide for Hex +### Migration guide for Hex -To learn more about integrating with Hex, check out their [documentation](https://learn.hex.tech/docs/connect-to-data/data-connections/dbt-integration#dbt-semantic-layer-integration) for more info. Additionally, refer to [dbt Semantic Layer cells](https://learn.hex.tech/docs/logic-cell-types/transform-cells/dbt-metrics-cells) to set up SQL cells in Hex. +To learn more about integrating with Hex, check out their [documentation](https://learn.hex.tech/docs/connect-to-data/data-connections/dbt-integration#dbt-semantic-layer-integration) for more info. Additionally, refer to [dbt Semantic Layer cells](https://learn.hex.tech/docs/logic-cell-types/transform-cells/dbt-metrics-cells) to set up SQL cells in Hex. -1. Set up a new connection for the Semantic Layer for your account. Something to note is that your old connection will still work. The following Loom video guides you in setting up your Semantic Layer with Hex: - - +1. Set up a new connection for the dbt Semantic Layer for your account. Something to note is that your legacy connection will still work. 2. Re-create the dashboards or reports that use the legacy dbt Semantic Layer. 
diff --git a/website/docs/reference/artifacts/run-results-json.md index dd92a9c4e53..5b3549db55b 100644 --- a/website/docs/reference/artifacts/run-results-json.md +++ b/website/docs/reference/artifacts/run-results-json.md @@ -3,7 +3,7 @@ title: "Run results JSON file" sidebar_label: "Run results" ---
-**Current schema**: [`v4`](https://schemas.getdbt.com/dbt/run-results/v4/index.html)
+**Current schema**: [`v5`](https://schemas.getdbt.com/dbt/run-results/v5/index.html)

 **Produced by:** [`build`](/reference/commands/build)

diff --git a/website/docs/reference/commands/deps.md index f4f8153c115..60ccd091ad7 100644 --- a/website/docs/reference/commands/deps.md +++ b/website/docs/reference/commands/deps.md @@ -60,28 +60,28 @@ Update your versions in packages.yml, then run dbt deps

-dbt generates the `package-lock.yml` file in the _project_root_ where `packages.yml` is recorded, which contains all the resolved packages, the first time you run `dbt deps`. Each subsequent run records the packages installed in this file. If the subsequent `dbt deps` runs contain no updated packages in `depenedencies.yml` or `packages.yml`, dbt-core installs from `package-lock.yml`.
+The first time you run `dbt deps`, dbt generates the `package-lock.yml` file in the _project_root_ where `packages.yml` is recorded; it contains all the resolved packages. Each subsequent run records the packages installed in this file. If subsequent `dbt deps` runs contain no updated packages in `dependencies.yml` or `packages.yml`, dbt-core installs from `package-lock.yml`.

 When you update the package spec and run `dbt deps` again, the package-lock and package files update accordingly. You can run `dbt deps --lock` to update the `package-lock.yml` with the most recent dependencies from `packages`.

-The `--add` flag allows you to add a package to the `packages.yml` with configurable `--version` and `--source` information. The `--dry-run` flag, when set to `False`(default), recompiles the `package-lock.yml` file after a new package is added to the `packages.yml` file. Set the flag to `True` for the changes to not persist.
+The `--add-package` flag allows you to add a package to the `packages.yml` with configurable `--version` and `--source` information. The `--dry-run` flag, when set to `False` (the default), recompiles the `package-lock.yml` file after a new package is added to the `packages.yml` file. Set the flag to `True` if you don't want the changes to persist.
-Examples of the `--add` flag: +Examples of the `--add-package` flag: ```shell # add package from hub (--source arg defaults to "hub") -dbt deps add --package dbt-labs/dbt_utils --version 1.0.0 +dbt deps --add-package dbt-labs/dbt_utils@1.0.0 -# add package from hub with semantic version -dbt deps add --package dbt-labs/snowplow --version ">=0.7.0,<0.8.0" +# add package from hub with semantic version range +dbt deps --add-package dbt-labs/snowplow@">=0.7.0,<0.8.0" # add package from git -dbt deps add --package https://github.com/fivetran/dbt_amplitude --version v0.3.0 --source git +dbt deps --add-package https://github.com/fivetran/dbt_amplitude@v0.3.0 --source git -# add package from local (--version not required for local) -dbt deps add --package /opt/dbt/redshift --source local +# add package from local +dbt deps --add-package /opt/dbt/redshift --source local -# add package to packages.yml WITHOUT updating package-lock.yml -dbt deps add --package dbt-labs/dbt_utils --version 1.0.0 --dry-run True +# add package to packages.yml and package-lock.yml WITHOUT actually installing dependencies +dbt deps --add-package dbt-labs/dbt_utils@1.0.0 --dry-run ``` - \ No newline at end of file + diff --git a/website/docs/reference/configs-and-properties.md b/website/docs/reference/configs-and-properties.md index 8a557c762ed..c6458babeaa 100644 --- a/website/docs/reference/configs-and-properties.md +++ b/website/docs/reference/configs-and-properties.md @@ -157,9 +157,9 @@ You can find an exhaustive list of each supported property and config, broken do * Model [properties](/reference/model-properties) and [configs](/reference/model-configs) * Source [properties](/reference/source-properties) and [configs](source-configs) * Seed [properties](/reference/seed-properties) and [configs](/reference/seed-configs) -* [Snapshot Properties](snapshot-properties) +* Snapshot [properties](snapshot-properties) * Analysis [properties](analysis-properties) -* [Macro Properties](/reference/macro-properties) +* Macro [properties](/reference/macro-properties) * Exposure [properties](/reference/exposure-properties) ## FAQs diff --git a/website/docs/reference/dbt-commands.md b/website/docs/reference/dbt-commands.md index d5f0bfcd2ad..4cb20051ea2 100644 --- a/website/docs/reference/dbt-commands.md +++ b/website/docs/reference/dbt-commands.md @@ -5,7 +5,7 @@ title: "dbt Command reference" You can run dbt using the following tools: - In your browser with the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) -- On the command line interface using the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) or open-source [dbt Core](/docs/core/about-dbt-core), both of which enable you to execute dbt commands. The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its [features](/docs/cloud/about-cloud/dbt-cloud-features). +- On the command line interface using the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) or open-source [dbt Core](/docs/core/installation-overview), both of which enable you to execute dbt commands. The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its [features](/docs/cloud/about-cloud/dbt-cloud-features). The following sections outline the commands supported by dbt and their relevant flags. For information about selecting models on the command line, consult the docs on [Model selection syntax](/reference/node-selection/syntax). 
@@ -71,7 +71,7 @@ Use the following dbt commands in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/ -Use the following dbt commands in [dbt Core](/docs/core/about-dbt-core) and use the `dbt` prefix. For example, to run the `test` command, type `dbt test`. +Use the following dbt commands in [dbt Core](/docs/core/installation-overview) and use the `dbt` prefix. For example, to run the `test` command, type `dbt test`. - [build](/reference/commands/build): build and test all selected resources (models, seeds, snapshots, tests) - [clean](/reference/commands/clean): deletes artifacts present in the dbt project diff --git a/website/docs/reference/dbt_project.yml.md b/website/docs/reference/dbt_project.yml.md index caf501c27ab..34af0f696c7 100644 --- a/website/docs/reference/dbt_project.yml.md +++ b/website/docs/reference/dbt_project.yml.md @@ -22,7 +22,85 @@ dbt uses YAML in a few different places. If you're new to YAML, it would be wort ::: - + + + + +```yml +[name](/reference/project-configs/name): string + +[config-version](/reference/project-configs/config-version): 2 +[version](/reference/project-configs/version): version + +[profile](/reference/project-configs/profile): profilename + +[model-paths](/reference/project-configs/model-paths): [directorypath] +[seed-paths](/reference/project-configs/seed-paths): [directorypath] +[test-paths](/reference/project-configs/test-paths): [directorypath] +[analysis-paths](/reference/project-configs/analysis-paths): [directorypath] +[macro-paths](/reference/project-configs/macro-paths): [directorypath] +[snapshot-paths](/reference/project-configs/snapshot-paths): [directorypath] +[docs-paths](/reference/project-configs/docs-paths): [directorypath] +[asset-paths](/reference/project-configs/asset-paths): [directorypath] + +[target-path](/reference/project-configs/target-path): directorypath +[log-path](/reference/project-configs/log-path): directorypath +[packages-install-path](/reference/project-configs/packages-install-path): directorypath + +[clean-targets](/reference/project-configs/clean-targets): [directorypath] + +[query-comment](/reference/project-configs/query-comment): string + +[require-dbt-version](/reference/project-configs/require-dbt-version): version-range | [version-range] + +[dbt-cloud](/docs/cloud/cloud-cli-installation): + [project-id](/docs/cloud/configure-cloud-cli#configure-the-dbt-cloud-cli): project_id # Required + [defer-env-id](/docs/cloud/about-cloud-develop-defer#defer-in-dbt-cloud-cli): environment_id # Optional + +[quoting](/reference/project-configs/quoting): + database: true | false + schema: true | false + identifier: true | false + +metrics: + + +models: + [](/reference/model-configs) + +seeds: + [](/reference/seed-configs) + +semantic-models: + + +snapshots: + [](/reference/snapshot-configs) + +sources: + [](source-configs) + +tests: + [](/reference/test-configs) + +vars: + [](/docs/build/project-variables) + +[on-run-start](/reference/project-configs/on-run-start-on-run-end): sql-statement | [sql-statement] +[on-run-end](/reference/project-configs/on-run-start-on-run-end): sql-statement | [sql-statement] + +[dispatch](/reference/project-configs/dispatch-config): + - macro_namespace: packagename + search_order: [packagename] + +[restrict-access](/docs/collaborate/govern/model-access): true | false + +``` + + + + + diff --git a/website/docs/reference/model-configs.md b/website/docs/reference/model-configs.md index 06830d0d32b..19391f1c763 100644 --- a/website/docs/reference/model-configs.md +++ 
b/website/docs/reference/model-configs.md @@ -1,8 +1,13 @@ --- title: Model configurations description: "Read this guide to understand model configurations in dbt." +meta: + resource_type: Models --- +import ConfigResource from '/snippets/_config-description-resource.md'; +import ConfigGeneral from '/snippets/_config-description-general.md'; + ## Related documentation * [Models](/docs/build/models) * [`run` command](/reference/commands/run) @@ -10,6 +15,8 @@ description: "Read this guide to understand model configurations in dbt." ## Available configurations ### Model-specific configurations + + + + +## Setting table properties +[Table properties](https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-tblproperties.html) can be set with your configuration for tables or views using `tblproperties`: + + + +```sql +{{ config( + tblproperties={ + 'delta.autoOptimize.optimizeWrite' : 'true', + 'delta.autoOptimize.autoCompact' : 'true' + } + ) }} +``` + + + +:::caution + +These properties are sent directly to Databricks without validation in dbt, so be thoughtful with how you use this feature. You will need to do a full refresh of incremental materializations if you change their `tblproperties`. + +::: + +One application of this feature is making `delta` tables compatible with `iceberg` readers using the [Universal Format](https://docs.databricks.com/en/delta/uniform.html). diff --git a/website/docs/reference/resource-configs/enabled.md b/website/docs/reference/resource-configs/enabled.md index d146f229494..552777c5c81 100644 --- a/website/docs/reference/resource-configs/enabled.md +++ b/website/docs/reference/resource-configs/enabled.md @@ -20,6 +20,17 @@ default_value: true }> + + +```yml +models: + [](/reference/resource-configs/resource-path): + +enabled: true | false + +``` + + + ```sql @@ -35,10 +46,15 @@ select ... + + + + + ```yml -models: +seeds: [](/reference/resource-configs/resource-path): +enabled: true | false @@ -48,13 +64,12 @@ models: - - + ```yml -seeds: +snapshots: [](/reference/resource-configs/resource-path): +enabled: true | false @@ -62,10 +77,6 @@ seeds: - - - - ```sql @@ -83,10 +94,14 @@ select ... + + + + ```yml -snapshots: +tests: [](/reference/resource-configs/resource-path): +enabled: true | false @@ -94,10 +109,6 @@ snapshots: - - - - ```sql @@ -125,17 +136,6 @@ select ... - - -```yml -tests: - [](/reference/resource-configs/resource-path): - +enabled: true | false - -``` - - - diff --git a/website/docs/reference/resource-configs/group.md b/website/docs/reference/resource-configs/group.md index bce2a72136e..a71935013c4 100644 --- a/website/docs/reference/resource-configs/group.md +++ b/website/docs/reference/resource-configs/group.md @@ -29,26 +29,29 @@ Support for grouping models was added in dbt Core v1.5 - + ```yml -version: 2 - models: - - name: model_name - group: finance + + [](resource-path): + +group: GROUP_NAME + ``` + - + ```yml +version: 2 + models: - [](resource-path): - +group: finance -``` + - name: MODEL_NAME + group: GROUP +``` @@ -57,7 +60,7 @@ models: ```sql {{ config( - group='finance' + group='GROUP_NAME' ) }} select ... 
@@ -85,7 +88,7 @@ Support for grouping seeds was added in dbt Core v1.5 ```yml models: [](resource-path): - +group: finance + +group: GROUP_NAME ``` @@ -94,8 +97,8 @@ models: ```yml seeds: - - name: [] - group: finance + - name: [SEED_NAME] + group: GROUP_NAME ``` @@ -120,7 +123,7 @@ Support for grouping snapshots was added in dbt Core v1.5 ```yml snapshots: [](resource-path): - +group: finance + +group: GROUP_NAME ``` @@ -131,7 +134,7 @@ snapshots: {% snapshot [snapshot_name](snapshot_name) %} {{ config( - group='finance' + group='GROUP_NAME' ) }} select ... @@ -161,7 +164,7 @@ Support for grouping tests was added in dbt Core v1.5 ```yml tests: [](resource-path): - +group: finance + +group: GROUP_NAME ``` @@ -176,7 +179,7 @@ version: 2 tests: - : config: - group: finance + group: GROUP_NAME ``` @@ -187,7 +190,7 @@ version: 2 {% test () %} {{ config( - group='finance' + group='GROUP_NAME' ) }} select ... @@ -202,7 +205,7 @@ select ... ```sql {{ config( - group='finance' + group='GROUP_NAME' ) }} ``` @@ -220,8 +223,8 @@ select ... version: 2 analyses: - - name: - group: finance + - name: ANALYSIS_NAME + group: GROUP_NAME ``` @@ -244,7 +247,7 @@ Support for grouping metrics was added in dbt Core v1.5 ```yaml metrics: [](resource-path): - [+](plus-prefix)group: finance + [+](plus-prefix)group: GROUP_NAME ``` @@ -255,8 +258,8 @@ metrics: version: 2 metrics: - - name: [] - group: finance + - name: [METRIC_NAME] + group: GROUP_NAME ``` @@ -277,23 +280,27 @@ Support for grouping semantic models has been added in dbt Core v1.7. - + ```yaml -semantic_models: - - name: - group: + +semantic-models: + [](resource-path): + [+](plus-prefix)group: GROUP_NAME ``` - + ```yaml -semantic-models: - [](resource-path): - [+](plus-prefix)group: + +semantic_models: + - name: SEMANTIC_MODEL_NAME + group: GROUP_NAME + + ``` diff --git a/website/docs/reference/resource-configs/meta.md b/website/docs/reference/resource-configs/meta.md index 9ccf2cc60dc..bc0c0c7c041 100644 --- a/website/docs/reference/resource-configs/meta.md +++ b/website/docs/reference/resource-configs/meta.md @@ -277,8 +277,8 @@ seeds: select 1 as id ``` - - +
+ ### Assign owner in the dbt_project.yml as a config property diff --git a/website/docs/reference/resource-configs/target_schema.md b/website/docs/reference/resource-configs/target_schema.md index 041f004e20c..9d459b32bad 100644 --- a/website/docs/reference/resource-configs/target_schema.md +++ b/website/docs/reference/resource-configs/target_schema.md @@ -74,7 +74,7 @@ Notes: * Consider whether this use-case is right for you, as downstream `refs` will select from the `dev` version of a snapshot, which can make it hard to validate models that depend on snapshots (see above [FAQ](#faqs)) - + ```sql {{ diff --git a/website/docs/reference/resource-properties/config.md b/website/docs/reference/resource-properties/config.md index e6021def852..55d2f64d9ff 100644 --- a/website/docs/reference/resource-properties/config.md +++ b/website/docs/reference/resource-properties/config.md @@ -16,6 +16,7 @@ datatype: "{dictionary}" { label: 'Sources', value: 'sources', }, { label: 'Metrics', value: 'metrics', }, { label: 'Exposures', value: 'exposures', }, + { label: 'Semantic models', value: 'semantic models', }, ] }> @@ -182,6 +183,36 @@ exposures:
+ + + + +Support for the `config` property on `semantic_models` was added in dbt Core v1.7 + + + + + + + +```yml +version: 2 + +semantic_models: + - name: + config: + enabled: true | false + group: + meta: {dictionary} +``` + + + + + + +
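+The same settings can also be applied to semantic models at the project level. A minimal sketch in `dbt_project.yml` (the project name here is a placeholder):
+
+```yml
+semantic-models:
+  my_dbt_project:
+    +enabled: false # disable all semantic models in the project
+```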
+## Definition The `config` property allows you to configure resources at the same time you're defining properties in YAML files. diff --git a/website/docs/reference/resource-properties/latest_version.md b/website/docs/reference/resource-properties/latest_version.md index 4c531879598..567ea5e7e1f 100644 --- a/website/docs/reference/resource-properties/latest_version.md +++ b/website/docs/reference/resource-properties/latest_version.md @@ -25,6 +25,8 @@ The latest version of this model. The "latest" version is relevant for: This value can be a string or a numeric (integer or float) value. It must be one of the [version identifiers](/reference/resource-properties/versions#v) specified in this model's list of `versions`. +To run the latest version of a model, you can use the [`--select` flag](/reference/node-selection/syntax). Refer to [Model versions](/docs/collaborate/govern/model-versions#run-a-model-with-multiple-versions) for more information and syntax. + ## Default If not specified for a versioned model, `latest_version` defaults to the largest [version identifier](/reference/resource-properties/versions#v): numerically greatest (if all version identifiers are numeric), otherwise the alphabetically last (if they are strings). diff --git a/website/docs/reference/resource-properties/versions.md b/website/docs/reference/resource-properties/versions.md index 86e9abf34a8..f6b71852aef 100644 --- a/website/docs/reference/resource-properties/versions.md +++ b/website/docs/reference/resource-properties/versions.md @@ -43,6 +43,9 @@ The value of the version identifier is used to order versions of a model relativ In general, we recommend that you use a simple "major versioning" scheme for your models: `1`, `2`, `3`, and so on, where each version reflects a breaking change from previous versions. You are able to use other versioning schemes. dbt will sort your version identifiers alphabetically if the values are not all numeric. You should **not** include the letter `v` in the version identifier, as dbt will do that for you. +To run a model with multiple versions, you can use the [`--select` flag](/reference/node-selection/syntax). Refer to [Model versions](/docs/collaborate/govern/model-versions#run-a-model-with-multiple-versions) for more information and syntax. + + ### `defined_in` The name of the model file (excluding the file extension, e.g. `.sql` or `.py`) where the model version is defined. diff --git a/website/docs/reference/seed-configs.md b/website/docs/reference/seed-configs.md index 429aa9444ae..dd733795eef 100644 --- a/website/docs/reference/seed-configs.md +++ b/website/docs/reference/seed-configs.md @@ -1,11 +1,19 @@ --- title: Seed configurations description: "Read this guide to learn about using seed configurations in dbt." 
+meta: + resource_type: Seeds --- +import ConfigResource from '/snippets/_config-description-resource.md'; +import ConfigGeneral from '/snippets/_config-description-general.md'; + + ## Available configurations ### Seed-specific configurations + + + + + + + + Google BigQuery - ❌ + ✅ ❌ diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js index ee593e568f4..b4b758e7744 100644 --- a/website/docusaurus.config.js +++ b/website/docusaurus.config.js @@ -1,6 +1,7 @@ const path = require("path"); const math = require("remark-math"); const katex = require("rehype-katex"); + const { versions, versionedPages, versionedCategories } = require("./dbt-versions"); require("dotenv").config(); @@ -48,7 +49,7 @@ var siteSettings = { onBrokenMarkdownLinks: "throw", trailingSlash: false, themeConfig: { - docs:{ + docs: { sidebar: { hideable: true, autoCollapseCategories: true, @@ -70,17 +71,17 @@ var siteSettings = { }, announcementBar: { id: "biweekly-demos", - content: - "Join our weekly demos and dbt Cloud in action!", + content: "Join our weekly demos and see dbt Cloud in action!", backgroundColor: "#047377", textColor: "#fff", isCloseable: true, }, announcementBarActive: true, - announcementBarLink: "https://www.getdbt.com/resources/dbt-cloud-demos-with-experts?utm_source=docs&utm_medium=event&utm_campaign=q1-2024_cloud-demos-with-experts_awareness", + announcementBarLink: + "https://www.getdbt.com/resources/dbt-cloud-demos-with-experts?utm_source=docs&utm_medium=event&utm_campaign=q1-2024_cloud-demos-with-experts_awareness", // Set community spotlight member on homepage // This is the ID for a specific file under docs/community/spotlight - communitySpotlightMember: "faith-lierheimer", + communitySpotlightMember: "alison-stanton", prism: { theme: (() => { var theme = require("prism-react-renderer/themes/nightOwl"); @@ -126,12 +127,12 @@ var siteSettings = { position: "right", items: [ { - label: 'Courses', - href: 'https://courses.getdbt.com', + label: "Courses", + href: "https://courses.getdbt.com", }, { - label: 'Best Practices', - to: '/best-practices', + label: "Best Practices", + to: "/best-practices", }, { label: "Guides", @@ -144,7 +145,7 @@ var siteSettings = { { label: "Glossary", to: "/glossary", - } + }, ], }, { @@ -193,9 +194,10 @@ var siteSettings = { `, }, @@ -229,7 +231,8 @@ var siteSettings = { }, blog: { blogTitle: "Developer Blog | dbt Developer Hub", - blogDescription: "Find tutorials, product updates, and developer insights in the dbt Developer Blog.", + blogDescription: + "Find tutorials, product updates, and developer insights in the dbt Developer Blog.", postsPerPage: 20, blogSidebarTitle: "Recent posts", blogSidebarCount: 5, @@ -243,7 +246,10 @@ var siteSettings = { [path.resolve("plugins/insertMetaTags"), { metatags }], path.resolve("plugins/svg"), path.resolve("plugins/customWebpackConfig"), - [path.resolve("plugins/buildGlobalData"), { versionedPages, versionedCategories }], + [ + path.resolve("plugins/buildGlobalData"), + { versionedPages, versionedCategories }, + ], path.resolve("plugins/buildAuthorPages"), path.resolve("plugins/buildSpotlightIndexPage"), path.resolve("plugins/buildQuickstartIndexPage"), @@ -258,9 +264,10 @@ var siteSettings = { src: "https://cdn.jsdelivr.net/npm/featherlight@1.7.14/release/featherlight.min.js", defer: true, }, + "https://cdn.jsdelivr.net/npm/clipboard@2.0.11/dist/clipboard.min.js", + "/js/headerLinkCopy.js", "/js/gtm.js", - "/js/onetrust.js", - "https://kit.fontawesome.com/7110474d41.js", + "/js/onetrust.js" ], stylesheets: 
[ "/css/fonts.css", @@ -276,8 +283,8 @@ var siteSettings = { "sha384-odtC+0UGzzFL/6PNoE8rX/SPcQDXBJ+uRepguP4QkPCm2LBxH3FA3y+fKSiJ+AmM", crossorigin: "anonymous", }, - {rel: 'icon', href: '/img/favicon.png', type: 'image/png'}, - {rel: 'icon', href: '/img/favicon.svg', type: 'image/svg+xml'}, + { rel: "icon", href: "/img/favicon.png", type: "image/png" }, + { rel: "icon", href: "/img/favicon.svg", type: "image/svg+xml" }, ], }; diff --git a/website/sidebars.js b/website/sidebars.js index 66ba731fb1b..473dfe85e04 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -1,6 +1,11 @@ const sidebarSettings = { docs: [ "docs/introduction", + { + type: "link", + label: "Guides", + href: `/guides`, + }, { type: "category", label: "Supported data platforms", @@ -27,12 +32,7 @@ const sidebarSettings = { "docs/cloud/about-cloud/browsers", ], }, // About dbt Cloud directory - { - type: "link", - label: "Guides", - href: `/guides`, - }, - { + { type: "category", label: "Set up dbt", collapsed: true, @@ -54,6 +54,7 @@ const sidebarSettings = { link: { type: "doc", id: "docs/cloud/connect-data-platform/about-connections" }, items: [ "docs/cloud/connect-data-platform/about-connections", + "docs/cloud/connect-data-platform/connect-microsoft-fabric", "docs/cloud/connect-data-platform/connect-starburst-trino", "docs/cloud/connect-data-platform/connect-snowflake", "docs/cloud/connect-data-platform/connect-bigquery", @@ -68,6 +69,7 @@ const sidebarSettings = { link: { type: "doc", id: "docs/cloud/manage-access/about-user-access" }, items: [ "docs/cloud/manage-access/about-user-access", + "docs/cloud/manage-access/invite-users", { type: "category", label: "User permissions and licenses", @@ -120,35 +122,6 @@ const sidebarSettings = { }, ], }, // Supported Git providers - { - type: "category", - label: "Develop in dbt Cloud", - link: { type: "doc", id: "docs/cloud/about-cloud-develop" }, - items: [ - "docs/cloud/about-cloud-develop", - "docs/cloud/about-cloud-develop-defer", - { - type: "category", - label: "dbt Cloud CLI", - link: { type: "doc", id: "docs/cloud/cloud-cli-installation" }, - items: [ - "docs/cloud/cloud-cli-installation", - "docs/cloud/configure-cloud-cli", - ], - }, - { - type: "category", - label: "dbt Cloud IDE", - link: { type: "doc", id: "docs/cloud/dbt-cloud-ide/develop-in-the-cloud" }, - items: [ - "docs/cloud/dbt-cloud-ide/develop-in-the-cloud", - "docs/cloud/dbt-cloud-ide/ide-user-interface", - "docs/cloud/dbt-cloud-ide/lint-format", - "docs/cloud/dbt-cloud-ide/dbt-cloud-tips", - ], - }, - ], - }, // dbt Cloud develop directory { type: "category", label: "Secure your tenant", @@ -174,14 +147,13 @@ const sidebarSettings = { link: { type: "doc", id: "docs/core/about-core-setup" }, items: [ "docs/core/about-core-setup", - "docs/core/about-dbt-core", "docs/core/dbt-core-environments", { type: "category", - label: "Install dbt", - link: { type: "doc", id: "docs/core/installation" }, + label: "Install dbt Core", + link: { type: "doc", id: "docs/core/installation-overview", }, items: [ - "docs/core/installation", + "docs/core/installation-overview", "docs/core/homebrew-install", "docs/core/pip-install", "docs/core/docker-install", @@ -248,6 +220,37 @@ const sidebarSettings = { "docs/running-a-dbt-project/using-threads", ], }, + { + type: "category", + label: "Develop with dbt Cloud", + collapsed: true, + link: { type: "doc", id: "docs/cloud/about-develop-dbt" }, + items: [ + "docs/cloud/about-develop-dbt", + "docs/cloud/about-cloud-develop-defer", + { + type: "category", + label: "dbt 
Cloud CLI", + collapsed: true, + link: { type: "doc", id: "docs/cloud/cloud-cli-installation" }, + items: [ + "docs/cloud/cloud-cli-installation", + "docs/cloud/configure-cloud-cli", + ], + }, + { + type: "category", + label: "dbt Cloud IDE", + link: { type: "doc", id: "docs/cloud/dbt-cloud-ide/develop-in-the-cloud" }, + items: [ + "docs/cloud/dbt-cloud-ide/develop-in-the-cloud", + "docs/cloud/dbt-cloud-ide/ide-user-interface", + "docs/cloud/dbt-cloud-ide/lint-format", + "docs/cloud/dbt-cloud-ide/dbt-cloud-tips", + ], + }, + ], + }, { type: "category", label: "Build dbt projects", @@ -414,7 +417,15 @@ const sidebarSettings = { link: { type: "doc", id: "docs/collaborate/collaborate-with-others" }, items: [ "docs/collaborate/collaborate-with-others", - "docs/collaborate/explore-projects", + { + type: "category", + label: "Explore dbt projects", + link: { type: "doc", id: "docs/collaborate/explore-projects" }, + items: [ + "docs/collaborate/explore-projects", + "docs/collaborate/explore-multiple-projects", + ], + }, { type: "category", label: "Git version control", @@ -955,11 +966,11 @@ const sidebarSettings = { type: "category", label: "Database Permissions", items: [ - "reference/database-permissions/about-database-permissions", + "reference/database-permissions/about-database-permissions", "reference/database-permissions/databricks-permissions", "reference/database-permissions/postgres-permissions", - "reference/database-permissions/redshift-permissions", - "reference/database-permissions/snowflake-permissions", + "reference/database-permissions/redshift-permissions", + "reference/database-permissions/snowflake-permissions", ], }, ], @@ -1049,6 +1060,7 @@ const sidebarSettings = { "best-practices/materializations/materializations-guide-7-conclusion", ], }, + "best-practices/clone-incremental-models", "best-practices/writing-custom-generic-tests", "best-practices/best-practice-workflows", "best-practices/dbt-unity-catalog-best-practices", diff --git a/website/snippets/_adapters-trusted.md b/website/snippets/_adapters-trusted.md index 7747ce16dec..20984253c32 100644 --- a/website/snippets/_adapters-trusted.md +++ b/website/snippets/_adapters-trusted.md @@ -1,18 +1,23 @@
+ + diff --git a/website/snippets/_adapters-verified.md b/website/snippets/_adapters-verified.md index 3cc1e800448..c3607b50125 100644 --- a/website/snippets/_adapters-verified.md +++ b/website/snippets/_adapters-verified.md @@ -15,7 +15,7 @@ icon="databricks"/> @@ -45,15 +45,15 @@ icon="starburst"/> + title="Microsoft Fabric" + body="Set up in dbt Cloud
Install with dbt Core

" + icon="fabric"/> diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index 2488e1d6c17..4e1cba64e00 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -34,6 +34,24 @@ Both development and deployment environments have a section called **General Set - If you select a current version with `(latest)` in the name, your environment will automatically install the latest stable version of the minor version selected. ::: +### Git repository caching + +At the start of every job run, dbt Cloud clones the project's Git repository so it has the latest versions of your project's code and runs `dbt deps` to install your dependencies. + +For improved reliability and performance on your job runs, you can enable dbt Cloud to keep a cache of the project's Git repository. So, if there's a third-party outage that causes the cloning operation to fail, dbt Cloud will instead use the cached copy of the repo so your jobs can continue running as scheduled. + +dbt Cloud caches your project's Git repo after each successful run and retains it for 8 days if there are no repo updates. It caches all packages regardless of installation method and does not fetch code outside of the job runs. + +To enable Git repository caching, select **Account settings** from the gear menu and enable the **Repository caching** option. + + + +:::note + +This feature is only available on the dbt Cloud Enterprise plan. + +::: + ### Custom branch behavior By default, all environments will use the default branch in your repository (usually the `main` branch) when accessing your dbt code. This is overridable within each dbt Cloud Environment using the **Default to a custom branch** option. This setting have will have slightly different behavior depending on the environment type: @@ -44,11 +62,7 @@ By default, all environments will use the default branch in your repository (usu For more info, check out this [FAQ page on this topic](/faqs/Environments/custom-branch-settings)! -### Extended attributes (Beta) - -:::important This feature is currently in beta -Extended Attributes is currently in [beta](/docs/dbt-versions/product-lifecycles?) for select users and is subject to change. -::: +### Extended attributes :::note Extended attributes are retrieved and applied only at runtime when `profiles.yml` is requested for a specific Cloud run. Extended attributes are currently _not_ taken into consideration for Cloud-specific features such as PrivateLink or SSH Tunneling that do not rely on `profiles.yml` values. diff --git a/website/snippets/_config-description-general.md b/website/snippets/_config-description-general.md new file mode 100644 index 00000000000..ef30901be7b --- /dev/null +++ b/website/snippets/_config-description-general.md @@ -0,0 +1 @@ +General configurations provide broader operational settings applicable across multiple resource types. Like resource-specific configurations, these can also be set in the project file, property files, or within resource-specific files. diff --git a/website/snippets/_config-description-resource.md b/website/snippets/_config-description-resource.md new file mode 100644 index 00000000000..a0910631338 --- /dev/null +++ b/website/snippets/_config-description-resource.md @@ -0,0 +1,3 @@ +Resource-specific configurations are applicable to only one dbt resource type rather than multiple resource types. 
You can define these settings in the project file (`dbt_project.yml`), a property file (`models/properties.yml` for models, similarly for other resources), or within the resource’s file using the `{{ config() }}` macro.
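As a minimal sketch of those three options (the project name `my_project` and model `my_model` are hypothetical placeholders, not values from this PR), a single setting such as `materialized` could live in any one of these places:

```yaml
# Option 1: dbt_project.yml -- applies to every model under models/ in my_project
models:
  my_project:
    +materialized: view
---
# Option 2: models/properties.yml -- applies only to the named model
models:
  - name: my_model
    config:
      materialized: view
# Option 3 would sit in models/my_model.sql itself, as Jinja rather than YAML:
# {{ config(materialized='view') }}
```

The `+` prefix is the `dbt_project.yml` convention that distinguishes config keys from folder and resource names; the `---` separator is only there to keep this sketch valid multi-document YAML.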
+ +The following resource-specific configurations are only available to {props.meta.resource_type}: diff --git a/website/snippets/_microsoft-adapters-soon.md b/website/snippets/_microsoft-adapters-soon.md index c3f30ef0939..927c9d2e5ca 100644 --- a/website/snippets/_microsoft-adapters-soon.md +++ b/website/snippets/_microsoft-adapters-soon.md @@ -1,3 +1,3 @@ :::tip Coming soon -dbt Cloud support for the Microsoft Fabric and Azure Synapse Analytics adapters is coming soon! +dbt Cloud support for the Azure Synapse Analytics adapter is coming soon! ::: \ No newline at end of file diff --git a/website/snippets/_packages_or_dependencies.md b/website/snippets/_packages_or_dependencies.md new file mode 100644 index 00000000000..5cc4c67e63c --- /dev/null +++ b/website/snippets/_packages_or_dependencies.md @@ -0,0 +1,34 @@ + +## Use cases + +Starting from dbt v1.6, `dependencies.yml` has replaced `packages.yml`. The `dependencies.yml` file can now contain both types of dependencies: "package" and "project" dependencies. +- ["Package" dependencies](/docs/build/packages) let you add source code from someone else's dbt project into your own, like a library. +- ["Project" dependencies](/docs/collaborate/govern/project-dependencies) provide a different way to build on top of someone else's work in dbt. + +If your dbt project doesn't require the use of Jinja within the package specifications, you can simply rename your existing `packages.yml` to `dependencies.yml`. However, if your project's package specifications use Jinja, particularly for scenarios like adding an environment variable or a [Git token method](/docs/build/packages#git-token-method) in a private Git package specification, you should continue using the `packages.yml` file name. + +There are some important differences between Package dependencies and Project dependencies: + + + + +Project dependencies are designed for the [dbt Mesh](/best-practices/how-we-mesh/mesh-1-intro) and [cross-project reference](/docs/collaborate/govern/project-dependencies#how-to-use-ref) workflow: + +- Use `dependencies.yml` when you need to set up cross-project references between different dbt projects, especially in a dbt Mesh setup. +- Use `dependencies.yml` when you want to include both projects and non-private dbt packages in your project's dependencies. + - Private packages are not supported in `dependencies.yml` because it intentionally doesn't support Jinja rendering or conditional configuration. This keeps the configuration static and predictable and ensures compatibility with other services, like dbt Cloud. +- Use `dependencies.yml` for organization and maintainability. It can help maintain your project's organization by allowing you to specify [dbt Hub packages](https://hub.getdbt.com/) like `dbt_utils`. This reduces the need for multiple YAML files to manage dependencies. + + + + + +Package dependencies allow you to add source code from someone else's dbt project into your own, like a library: + +- Use `packages.yml` when you want to download dbt packages, such as dbt projects, into your root or parent dbt project. Note that this doesn't contribute to the dbt Mesh workflow. +- Use `packages.yml` to include packages, including private packages, in your project's dependencies. If you have private packages that you need to reference, `packages.yml` is the way to go. +- `packages.yml` supports Jinja rendering for historical reasons, allowing dynamic configurations.
This can be useful if you need to insert values, like a [Git token method](/docs/build/packages#git-token-method) from an environment variable, into your package specifications. + +Currently, to use private Git repositories in dbt, you need to use a workaround that involves embedding a Git token with Jinja. This is not ideal, as it requires extra steps like creating a user and sharing a Git token. We're planning to introduce a simpler method soon that won't require Jinja-embedded secret environment variables. For that reason, `dependencies.yml` does not support Jinja. + +
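To make the two files concrete, here is a minimal sketch of each (the upstream project name, package version, and Git URL are illustrative placeholders, not values from this PR):

```yaml
# dependencies.yml -- projects and public Hub packages; no Jinja rendering
projects:
  - name: jaffle_finance          # hypothetical upstream project for cross-project refs
packages:
  - package: dbt-labs/dbt_utils   # public dbt Hub package
    version: 1.1.1
---
# packages.yml -- still rendered with Jinja, so a private Git package can embed a token
packages:
  - git: "https://{{ env_var('DBT_ENV_SECRET_GIT_CREDENTIAL') }}@github.com/example-org/private-package.git"
    revision: main
```

In practice each block lives in its own file; the `---` separator only keeps the sketch valid YAML.

diff --git a/website/snippets/_setup-pages-intro.md b/website/snippets/_setup-pages-intro.md new file mode 100644 index 00000000000..5ded5ba5ebc --- /dev/null +++ b/website/snippets/_setup-pages-intro.md @@ -0,0 +1,21 @@ + +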
+<ul>
+    <li>Maintained by: {props.meta.maintained_by}</li>
+    <li>Authors: {props.meta.authors}</li>
+    <li>GitHub repo: {props.meta.github_repo}</li>
+    <li>PyPI package: {props.meta.pypi_package}</li>
+    <li>Slack channel: {props.meta.slack_channel_name}</li>
+    <li>Supported dbt Core version: {props.meta.min_core_version} and newer</li>
+    <li>dbt Cloud support: {props.meta.cloud_support}</li>
+    <li>Minimum data platform version: {props.meta.min_supported_version}</li>
+</ul>
+
+<h2>Installing {props.meta.pypi_package}</h2>
+ +Use `pip` to install the adapter, which automatically installs `dbt-core` and any additional dependencies. Use the following command for installation: +python -m pip install {props.meta.pypi_package} + +

+<h2>Configuring {props.meta.pypi_package}</h2>
+ +

+For {props.meta.platform_name}-specific configuration, please refer to {props.meta.platform_name} configs.
+ diff --git a/website/snippets/_sl-measures-parameters.md b/website/snippets/_sl-measures-parameters.md new file mode 100644 index 00000000000..4bd32311fda --- /dev/null +++ b/website/snippets/_sl-measures-parameters.md @@ -0,0 +1,12 @@
+| Parameter | Description | Required |
+| --- | --- | --- |
+| [`name`](/docs/build/measures#name) | Provide a name for the measure, which must be unique and can't be repeated across all semantic models in your dbt project. | Required |
+| [`description`](/docs/build/measures#description) | Describes the calculated measure. | Optional |
+| [`agg`](/docs/build/measures#agg) | dbt supports the following aggregations: `sum`, `max`, `min`, `count_distinct`, and `sum_boolean`. | Required |
+| [`expr`](/docs/build/measures#expr) | Either reference an existing column in the table or use a SQL expression to create or derive a new one. | Optional |
+| [`non_additive_dimension`](/docs/build/measures#non-additive-dimensions) | Non-additive dimensions can be specified for measures that cannot be aggregated over certain dimensions, such as bank account balances, to avoid producing incorrect results. | Optional |
+| `agg_params` | Specific aggregation properties such as a percentile. | Optional |
+| `agg_time_dimension` | The time field. Defaults to the default agg time dimension for the semantic model. Available on dbt version 1.6 or higher. | Optional |
+| `label`* | How the metric appears in project docs and downstream integrations. | Required |
+| `create_metric`* | You can create a metric directly from a measure with `create_metric: True` and specify its display name with `create_metric_display_name`. | Optional |
+*Available on dbt version 1.7 or higher.
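To illustrate how these parameters combine, here is a minimal, hedged sketch of a measure inside a semantic model (the `orders` model and its columns are invented for this example, not taken from this PR):

```yaml
semantic_models:
  - name: orders
    model: ref('orders')    # hypothetical model for illustration
    measures:
      - name: order_total
        description: Sum of each order's amount.
        agg: sum
        expr: amount
        agg_time_dimension: ordered_at  # requires dbt v1.6 or higher
        create_metric: true             # v1.7+: exposes the measure directly as a metric
```

diff --git a/website/snippets/_sl-partner-links.md b/website/snippets/_sl-partner-links.md index c97c682171b..2ad49b94e95 100644 --- a/website/snippets/_sl-partner-links.md +++ b/website/snippets/_sl-partner-links.md @@ -26,7 +26,7 @@ The following tools integrate with the dbt Semantic Layer: className="external-link" target="_blank" rel="noopener noreferrer"> - +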
@@ -40,7 +40,7 @@ The following tools integrate with the dbt Semantic Layer: className="external-link" target="_blank" rel="noopener noreferrer"> - + @@ -54,7 +54,7 @@ The following tools integrate with the dbt Semantic Layer: className="external-link" target="_blank" rel="noopener noreferrer"> - + @@ -68,7 +68,7 @@ The following tools integrate with the dbt Semantic Layer: className="external-link" target="_blank" rel="noopener noreferrer"> - + @@ -82,7 +82,7 @@ The following tools integrate with the dbt Semantic Layer: className="external-link" target="_blank" rel="noopener noreferrer"> - + @@ -96,7 +96,7 @@ The following tools integrate with the dbt Semantic Layer: className="external-link" target="_blank" rel="noopener noreferrer"> - + diff --git a/website/snippets/_sl-test-and-query-metrics.md b/website/snippets/_sl-test-and-query-metrics.md index 43ebd929cb3..2e9490f089d 100644 --- a/website/snippets/_sl-test-and-query-metrics.md +++ b/website/snippets/_sl-test-and-query-metrics.md @@ -48,8 +48,8 @@ The dbt Cloud CLI is strongly recommended to define and query metrics for your d 1. Install [MetricFlow](/docs/build/metricflow-commands) as an extension of a dbt adapter from PyPI. 2. Create or activate your virtual environment with `python -m venv venv` or `source your-venv/bin/activate`. -3. Run `pip install dbt-metricflow`. - - You can install MetricFlow using PyPI as an extension of your dbt adapter in the command line. To install the adapter, run `pip install "dbt-metricflow[your_adapter_name]"` and add the adapter name at the end of the command. As an example for a Snowflake adapter, run `pip install "dbt-metricflow[snowflake]"`. +3. Run `python -m pip install dbt-metricflow`. + - You can install MetricFlow using PyPI as an extension of your dbt adapter in the command line. To install the adapter, run `python -m pip install "dbt-metricflow[your_adapter_name]"` and add the adapter name at the end of the command. As an example for a Snowflake adapter, run `python -m pip install "dbt-metricflow[snowflake]"`. - You'll need to manage versioning between dbt Core, your adapter, and MetricFlow. 4. Run `dbt parse`. This allows MetricFlow to build a semantic graph and generate a `semantic_manifest.json`. - This creates the file in your `/target` directory. If you're working from the Jaffle shop example, run `dbt seed && dbt run` before proceeding to ensure the data exists in your warehouse. diff --git a/website/snippets/core-version-support.md b/website/snippets/core-version-support.md index ff9fa94ff8c..4ec976d4df6 100644 --- a/website/snippets/core-version-support.md +++ b/website/snippets/core-version-support.md @@ -2,4 +2,4 @@ - **[Active](/docs/dbt-versions/core#ongoing-patches)** — We will patch regressions, new bugs, and include fixes for older bugs / quality-of-life improvements. We implement these changes when we have high confidence that they're narrowly scoped and won't cause unintended side effects. - **[Critical](/docs/dbt-versions/core#ongoing-patches)** — Newer minor versions transition the previous minor version into "Critical Support" with limited "security" releases for critical security and installation fixes. - **[End of Life](/docs/dbt-versions/core#eol-version-support)** — Minor versions that have reached EOL no longer receive new patch releases. -- **Deprecated** — dbt-core versions older than v1.0 are no longer maintained by dbt Labs, nor supported in dbt Cloud. 
+- **Deprecated** — dbt Core versions older than v1.0 are no longer maintained by dbt Labs, nor supported in dbt Cloud. diff --git a/website/snippets/core-versions-table.md b/website/snippets/core-versions-table.md index 71e11974a56..fc7b054bc0a 100644 --- a/website/snippets/core-versions-table.md +++ b/website/snippets/core-versions-table.md @@ -1,22 +1,15 @@ -### Latest Releases +### Latest releases -| dbt Core | Initial Release | Support Level | Critical Support Until | -|------------------------------------------------------------|-----------------|----------------|-------------------------| -| [**v1.7**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.7) | Nov 2, 2023 | Active | Nov 1, 2024 | -| [**v1.6**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.6) | Jul 31, 2023 | Critical | Jul 30, 2024 | -| [**v1.5**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.5) | Apr 27, 2023 | Critical | Apr 27, 2024 | -| [**v1.4**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.4) | Jan 25, 2023 | Critical | Jan 25, 2024 | -| [**v1.3**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.3) | Oct 12, 2022 | End of Life* ⚠️ | Oct 12, 2023 | -| [**v1.2**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.2) | Jul 26, 2022 | End of Life* ⚠️ | Jul 26, 2023 | -| [**v1.1**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.1) ⚠️ | Apr 28, 2022 | Deprecated ⛔️ | Deprecated ⛔️ | -| [**v1.0**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.0) ⚠️ | Dec 3, 2021 | Deprecated ⛔️ | Deprecated ⛔️ | +| dbt Core | Initial release | Support level and end date | +|:----------------------------------------------------:|:---------------:|:-------------------------------------:| +| [**v1.7**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.7) | Nov 2, 2023 | Active — Nov 1, 2024 | +| [**v1.6**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.6) | Jul 31, 2023 | Critical — Jul 30, 2024 | +| [**v1.5**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.5) | Apr 27, 2023 | Critical — Apr 27, 2024 | +| [**v1.4**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.4) | Jan 25, 2023 | Critical — Jan 25, 2024 | +| [**v1.3**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.3) | Oct 12, 2022 | End of Life* ⚠️ | +| [**v1.2**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.2) | Jul 26, 2022 | End of Life* ⚠️ | +| [**v1.1**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.1) | Apr 28, 2022 | End of Life* ⚠️ | +| [**v1.0**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.0) | Dec 3, 2021 | End of Life* ⚠️ | | **v0.X** ⛔️ | (Various dates) | Deprecated ⛔️ | Deprecated ⛔️ | _*All versions of dbt Core since v1.0 are available in dbt Cloud until further notice. Versions that are EOL do not receive any fixes. 
For the best support, we recommend upgrading to a version released within the past 12 months._ -### Planned future releases -_Future release dates are tentative and subject to change._ - -| dbt Core | Planned Release | Critical & dbt Cloud Support Until | -|----------|-----------------|-------------------------------------| -| **v1.8** | _Jan 2024_ | _Jan 2025_ | -| **v1.9** | _Apr 2024_ | _Apr 2025_ | diff --git a/website/src/components/author/index.js b/website/src/components/author/index.js index 6b49295936d..6bbe786a5ac 100644 --- a/website/src/components/author/index.js +++ b/website/src/components/author/index.js @@ -5,6 +5,7 @@ import useDocusaurusContext from '@docusaurus/useDocusaurusContext'; import BlogLayout from '@theme/BlogLayout'; import getAllPosts from '../../utils/get-all-posts'; import imageCacheWrapper from '../../../functions/image-cache-wrapper'; +import getSvgIcon from '../../utils/get-svg-icon'; function Author(props) { const { authorData } = props @@ -28,49 +29,63 @@ function Author(props) { - - {description && + + {description && ( - } + )} -
+
- {name} + {name}

{name}

- {job_title && job_title} {organization && `@ ${organization}`} + {job_title && job_title} {organization && `@ ${organization}`}
- {links && links.length > 0 && ( - <> - | - {links.map((link, i) => ( - - - - ))} - - ) - } -
+ {links && links.length > 0 && ( + <> + | + {links.map((link, i) => ( + + {/* */} + {link?.icon ? ( +
+ {getSvgIcon(link?.icon)} +
+ ) : null} +
+ ))} + + )} +

-

{description ? description : ''}

+

{description ? description : ""}

- {authorPosts && authorPosts.length > 0 && - - } + {authorPosts && authorPosts.length > 0 && ( + + )}
); @@ -98,7 +113,5 @@ function AuthorPosts({posts}) { ) } - - export default Author; diff --git a/website/src/components/communitySpotlightList/index.js b/website/src/components/communitySpotlightList/index.js index b72d640b74d..6885f5ff2ac 100644 --- a/website/src/components/communitySpotlightList/index.js +++ b/website/src/components/communitySpotlightList/index.js @@ -11,7 +11,7 @@ const communityDescription = "The dbt Community is where analytics engineering l // This date determines where the 'Previously on the Spotlight" text will show. // Any spotlight members with a 'dateCreated' field before this date // will be under the 'Previously..' header. -const currentSpotlightDate = new Date('2023-06-01') +const currentSpotlightDate = new Date('2023-10-31') function CommunitySpotlightList({ spotlightData }) { const { siteConfig } = useDocusaurusContext() diff --git a/website/src/components/icon/index.js b/website/src/components/icon/index.js new file mode 100644 index 00000000000..ab294758c0b --- /dev/null +++ b/website/src/components/icon/index.js @@ -0,0 +1,7 @@ +import getSvgIcon from "../../utils/get-svg-icon" + +function Icon({ name }) { + return getSvgIcon(name) +} + +export default Icon diff --git a/website/src/components/lifeCycle/index.js b/website/src/components/lifeCycle/index.js new file mode 100644 index 00000000000..ca08783a017 --- /dev/null +++ b/website/src/components/lifeCycle/index.js @@ -0,0 +1,11 @@ +import React from 'react' +import styles from './styles.module.css'; + +export default function Lifecycle(props) { + if (!props.status) { + return null; + } + return ( + {props.status} + ); +} diff --git a/website/src/components/lifeCycle/styles.module.css b/website/src/components/lifeCycle/styles.module.css new file mode 100644 index 00000000000..ca145294127 --- /dev/null +++ b/website/src/components/lifeCycle/styles.module.css @@ -0,0 +1,13 @@ +.lifecycle { + background-color: #047377; /* Teal background to align with dbt Labs' color */ + color: #fff; /* Light text color for contrast */ + font-size: 0.78rem; /* Using rem so font size is relative to root */ + padding: 1px 8px; /* Adjust padding for a more pill-like shape */ + border-radius: 16px; /* Larger border-radius for rounded edges */ + margin-left: 8px; /* Margin to separate from the header text */ + vertical-align: middle; /* Align with the title */ + display: inline-block; /* Use inline-block for better control */ + font-weight: bold; /* Bold text */ + text-transform: capitalize; /* Capitalize text */ + line-height: 1.6; /* Adjust line height for vertical alignment */ +} diff --git a/website/src/components/quickstartGuideCard/index.js b/website/src/components/quickstartGuideCard/index.js index 104bb5cb35b..b13343c8ba7 100644 --- a/website/src/components/quickstartGuideCard/index.js +++ b/website/src/components/quickstartGuideCard/index.js @@ -2,11 +2,14 @@ import React from "react"; import Link from "@docusaurus/Link"; import styles from "./styles.module.css"; import getIconType from "../../utils/get-icon-type"; +import getSvgIcon from "../../utils/get-svg-icon"; export default function QuickstartGuideCard({ frontMatter }) { const { id, title, time_to_complete, icon, tags, level, recently_updated } = frontMatter; + const rightArrow = getSvgIcon('fa-arrow-right') + return ( {recently_updated && ( @@ -21,7 +24,7 @@ export default function QuickstartGuideCard({ frontMatter }) { )} - Start + Start {rightArrow} ( {(tags || level) && ( @@ -50,7 +53,7 @@ export function QuickstartGuideTitle({ frontMatter }) { Updated )}
{time_to_complete && ( - {time_to_complete} + {getSvgIcon('fa-clock')} {time_to_complete} )} {(tags || level) && ( diff --git a/website/src/components/quickstartGuideCard/styles.module.css b/website/src/components/quickstartGuideCard/styles.module.css index 5df40c8479e..cc3e0df7146 100644 --- a/website/src/components/quickstartGuideCard/styles.module.css +++ b/website/src/components/quickstartGuideCard/styles.module.css @@ -29,6 +29,7 @@ [data-theme='dark'] .quickstartCard .icon { color: #fff; + fill: #fff; } .quickstartCard h3 { @@ -64,9 +65,17 @@ color: #fff; } -.quickstartCard .start i { +.quickstartCard .start .right_arrow svg { margin-left: 4px; - font-size: .9rem; + width: 12.6px; + fill: var(--ifm-link-color); +} +.quickstartCard .start:hover .right_arrow svg { + fill: var(--ifm-link-hover-color) +} + +[data-theme='dark'] .quickstartCard .start .right_arrow svg { + fill: #fff; } .quickstartCard .recently_updated { @@ -131,7 +140,16 @@ .infoContainer .time_to_complete { font-weight: 700; - +} + +.infoContainer .time_to_complete svg { + fill: var(--ifm-menu-color); + width: 18px; + margin: 0 4px -2px 0; +} + +[data-theme='dark'] .infoContainer .time_to_complete svg { + fill: #fff; } .infoContainer .recently_updated { diff --git a/website/src/components/quickstartTOC/index.js b/website/src/components/quickstartTOC/index.js index 3ff5e027208..c28d462ceb1 100644 --- a/website/src/components/quickstartTOC/index.js +++ b/website/src/components/quickstartTOC/index.js @@ -6,6 +6,7 @@ import clsx from "clsx"; import style from "./styles.module.css"; import { useLocation, useHistory } from "@docusaurus/router"; import queryString from "query-string"; +import getSvgIcon from "../../utils/get-svg-icon"; function QuickstartTOC() { const history = useHistory(); @@ -81,19 +82,14 @@ function QuickstartTOC() { buttonContainer.classList.add(style.buttonContainer); const prevButton = document.createElement("a"); const nextButton = document.createElement("a"); - const nextButtonIcon = document.createElement("i"); - const prevButtonIcon = document.createElement("i"); - - prevButtonIcon.classList.add("fa-regular", "fa-arrow-left"); - prevButton.textContent = "Back"; - prevButton.prepend(prevButtonIcon); + + prevButton.innerHTML = + ' Back'; prevButton.classList.add(clsx(style.button, style.prevButton)); prevButton.disabled = index === 0; prevButton.addEventListener("click", () => handlePrev(index + 1)); - nextButtonIcon.classList.add("fa-regular", "fa-arrow-right"); - nextButton.textContent = "Next"; - nextButton.appendChild(nextButtonIcon); + nextButton.innerHTML = 'Next '; nextButton.classList.add(clsx(style.button, style.nextButton)); nextButton.disabled = index === stepWrappers.length - 1; nextButton.addEventListener("click", () => handleNext(index + 1)); @@ -204,28 +200,30 @@ function QuickstartTOC() { if (tocListStyles.display === "none") { tocList.style.display = "block"; - tocMenuBtn.querySelector("i").style.transform = "rotate(0deg)"; + tocMenuBtn.querySelector("svg").style.transform = "rotate(0deg)"; } else { tocList.style.display = "none"; - tocMenuBtn.querySelector("i").style.transform = "rotate(-90deg)"; + tocMenuBtn.querySelector("svg").style.transform = "rotate(-90deg)"; } }; return ( <> - Menu -
    - {tocData.map((step) => ( -
  • - {step.stepNumber} {step.title} -
  • - ))} -
+ + Menu {getSvgIcon("fa-caret-down")} + +
    + {tocData.map((step) => ( +
  • + {step.stepNumber} {step.title} +
  • + ))} +
); } diff --git a/website/src/components/quickstartTOC/styles.module.css b/website/src/components/quickstartTOC/styles.module.css index 892e6f73be6..97dd9742756 100644 --- a/website/src/components/quickstartTOC/styles.module.css +++ b/website/src/components/quickstartTOC/styles.module.css @@ -99,6 +99,32 @@ html[data-theme="dark"] .stepWrapper .buttonContainer a:hover { margin-right: .4rem; } +.buttonContainer > a > svg { + width: 11.2px; + fill: var(--color-green-blue); + margin-bottom: -1px; +} + +.buttonContainer > a:hover > svg { + width: 11.2px; + fill: var(--color-white); +} + +html[data-theme="dark"] .buttonContainer > a > svg { + fill: var(--color-green-blue); +} + +html[data-theme="dark"] .buttonContainer > a:hover > svg { + fill: var(--color-white); +} + +.buttonContainer .prevButton svg { + margin-right: .4rem; +} +.buttonContainer .nextButton svg { + margin-left: .4rem; +} + .buttonContainer .nextButton { margin-left: auto; } @@ -111,6 +137,11 @@ html[data-theme="dark"] .stepWrapper .buttonContainer a:hover { .stepWrapper[data-step="1"] a.nextButton { background: var(--color-green-blue); color: var(--color-white); + fill: var(--color-white); +} + +.stepWrapper[data-step="1"] a.nextButton > svg { + fill: var(--color-white); } html[data-theme="dark"] .stepWrapper[data-step="1"] a.nextButton { @@ -129,11 +160,21 @@ html[data-theme="dark"] .stepWrapper[data-step="1"] a.nextButton { display: none; } -.toc_menu_btn i { +.toc_menu_btn i, .toc_menu_btn svg { transform: rotate(-90deg); vertical-align: middle; } +.toc_menu_btn svg { + width: 10px; + fill: var(--ifm-link-color); +} + +.toc_menu_btn:hover svg { + width: 10px; + fill: var(--ifm-link-hover-color); +} + @media (max-width: 996px) { .tocList { width: 100%; diff --git a/website/src/components/searchInput/index.js b/website/src/components/searchInput/index.js index e0a5faf4a82..5cba8b0acf1 100644 --- a/website/src/components/searchInput/index.js +++ b/website/src/components/searchInput/index.js @@ -1,5 +1,6 @@ import React from "react"; import styles from "./styles.module.css"; +import getSvgIcon from "../../utils/get-svg-icon"; const SearchInput = ({ value, @@ -9,7 +10,8 @@ const SearchInput = ({ }) => { return (