Merge branch 'current' into ly-docs-snowflake-native-app
nghi-ly committed Jun 4, 2024
2 parents 91a6578 + d9abfbe commit 6ca3997
Showing 79 changed files with 76 additions and 988 deletions.
2 changes: 1 addition & 1 deletion website/blog/2021-11-22-dbt-labs-pr-template.md
@@ -252,4 +252,4 @@ Once the file is added, name it whatever you want to make it clear that it’s y

With that, you now have a pull request template in your GitHub repository that can help your team follow analytics engineering best practices.

To dive deeper into how we use it as part of the analytics engineering workflow, check out the free [dbt Fundamentals on-demand course](https://courses.getdbt.com/courses/fundamentals).
To dive deeper into how we use it as part of the analytics engineering workflow, check out the free [dbt Fundamentals on-demand course](https://learn.getdbt.com/courses/dbt-fundamentals).
2 changes: 1 addition & 1 deletion website/blog/2021-11-22-primary-keys.md
@@ -140,4 +140,4 @@ You can query out primary key columns from the `pg_index` and `pg_attribute` adm

## Have you started testing primary keys yet?

If you’re looking for a deeper dive on testing primary keys, definitely check out the [dbt Fundamentals course](https://courses.getdbt.com/courses/fundamentals), which includes a full section with examples + practice on data testing in dbt.
If you’re looking for a deeper dive on testing primary keys, definitely check out the [dbt Fundamentals course](https://learn.getdbt.com/courses/dbt-fundamentals), which includes a full section with examples + practice on data testing in dbt.
@@ -107,7 +107,7 @@ We’re going to:

**Project Appearance**

Let's check in on the growth of [our project](https://github.com/dbt-labs/dbt-project-maturity/tree/main/2-toddlerhood). We've broken some of our logic into its own model — our original script had repetitive logic in <Term id="subquery">subqueries</Term>; now it's following a key principle of analytics engineering: <Term id="dry">Don't Repeat Yourself (DRY)</Term>. For more information on how to refactor your SQL queries for modularity, check out our [free on-demand course](https://courses.getdbt.com/courses/refactoring-sql-for-modularity).
Let's check in on the growth of [our project](https://github.com/dbt-labs/dbt-project-maturity/tree/main/2-toddlerhood). We've broken some of our logic into its own model — our original script had repetitive logic in <Term id="subquery">subqueries</Term>; now it's following a key principle of analytics engineering: <Term id="dry">Don't Repeat Yourself (DRY)</Term>. For more information on how to refactor your SQL queries for modularity, check out our [free on-demand course](https://learn.getdbt.com/courses/refactoring-sql-for-modularity).

We also added our first [YML files](https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/). Here, we have one yml file to [configure our sources](https://github.com/dbt-labs/dbt-project-maturity/blob/main/2-toddlerhood/models/source.yml), and one yml file to [describe our models](https://github.com/dbt-labs/dbt-project-maturity/blob/main/2-toddlerhood/models/schema.yml). We're just starting with basic declarations of our sources, <Term id="primary-key" /> testing using dbt's built-in tests, and a model-level description -- these are the first steps of a project just learning to walk!

@@ -229,7 +229,7 @@ This is where the power of dbt modeling really comes in! dbt allows you to break
The following are some methods I’ve used in order to properly optimize run times, leveraging dbt’s ability to modularize models.

:::note Note
I won’t get into our modeling methodology at dbt Labs in this article, but there are [plenty of resources](https://courses.getdbt.com/) to understand what might be happening in the following DAGs!
I won’t get into our modeling methodology at dbt Labs in this article, but there are [plenty of resources](https://learn.getdbt.com/) to understand what might be happening in the following DAGs!
:::

### Staggered joins
2 changes: 1 addition & 1 deletion website/blog/2022-07-19-migrating-from-stored-procs.md
@@ -221,5 +221,5 @@ dbt Labs has developed a number of related resources you can use to learn more a

- [Refactoring legacy SQL to dbt](https://docs.getdbt.com/tutorial/refactoring-legacy-sql)
- [The case for the ELT workflow](https://www.getdbt.com/analytics-engineering/case-for-elt-workflow/)
- [Refactoring SQL for modularity](https://courses.getdbt.com/courses/refactoring-sql-for-modularity)
- [Refactoring SQL for modularity](https://learn.getdbt.com/courses/refactoring-sql-for-modularity)
- [Data modeling techniques for modularity](https://www.getdbt.com/analytics-engineering/modular-data-modeling-technique/)
2 changes: 1 addition & 1 deletion website/blog/2022-11-30-dbt-project-evaluator.md
@@ -34,7 +34,7 @@ Throughout these engagements, we began to take note of the common issues many an

Maybe your team is facing some of these issues right now 👀 And that’s okay! We know that building an effective, scalable dbt project takes a lot of effort and brain power. Maybe you’ve inherited a legacy dbt project with a mountain of tech debt. Maybe you’re starting from scratch. Either way, it can be difficult to know the best way to set your team up for success. Don’t worry, you’re in the right place!

Through solving these problems over and over, the Professional Services team began to hone our best practices for working with dbt and how analytics engineers could improve their dbt project. We added “solutions reviews” to our list of service offerings — client engagements in which we evaluate a given dbt project and provide specific recommendations to improve performance, save developer time, and prevent misuse of dbt’s features. And in an effort to share these best practices with the wider dbt community, we developed a *lot* of content. We wrote articles on the Developer Blog (see [1](https://docs.getdbt.com/blog/on-the-importance-of-naming), [2](https://discourse.getdbt.com/t/your-essential-dbt-project-checklist/1377), and [3](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview)), gave [Coalesce talks](https://www.getdbt.com/coalesce-2020/auditing-model-layers-and-modularity-with-your-dag/), and created [training courses](https://courses.getdbt.com/courses/refactoring-sql-for-modularity).
Through solving these problems over and over, the Professional Services team began to hone our best practices for working with dbt and how analytics engineers could improve their dbt project. We added “solutions reviews” to our list of service offerings — client engagements in which we evaluate a given dbt project and provide specific recommendations to improve performance, save developer time, and prevent misuse of dbt’s features. And in an effort to share these best practices with the wider dbt community, we developed a *lot* of content. We wrote articles on the Developer Blog (see [1](https://docs.getdbt.com/blog/on-the-importance-of-naming), [2](https://discourse.getdbt.com/t/your-essential-dbt-project-checklist/1377), and [3](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview)), gave [Coalesce talks](https://www.getdbt.com/coalesce-2020/auditing-model-layers-and-modularity-with-your-dag/), and created [training courses](https://learn.getdbt.com/courses/refactoring-sql-for-modularity).

Time and time again, we found that when teams are aligned with these best practices, their projects are more:

2 changes: 1 addition & 1 deletion website/blog/2023-03-30-guide-to-debug-in-jinja.md
@@ -9,7 +9,7 @@ date: 2023-03-29
is_featured: true
---

*Editor's note—this post assumes intermediate knowledge of Jinja and macros development in dbt. For an introduction to Jinja in dbt check out [the documentation](https://docs.getdbt.com/docs/build/jinja-macros) and the free self-serve course on [Jinja, Macros, Pacakages](https://courses.getdbt.com/courses/jinja-macros-packages).*
*Editor's note—this post assumes intermediate knowledge of Jinja and macros development in dbt. For an introduction to Jinja in dbt check out [the documentation](https://docs.getdbt.com/docs/build/jinja-macros) and the free self-serve course on [Jinja, Macros, Packages](https://learn.getdbt.com/courses/jinja-macros-and-packages).*

Jinja brings a lot of power to dbt, allowing us to use `ref()`, `source()`, conditional code, and [macros](https://docs.getdbt.com/docs/build/jinja-macros). But, while Jinja brings flexibility, it also brings complexity, and as often happens with code, things can run in unexpected ways.

16 changes: 0 additions & 16 deletions website/dbt-versions.js
@@ -15,14 +15,6 @@ exports.versions = [
version: "1.5",
EOLDate: "2024-04-27",
},
{
version: "1.4",
EOLDate: "2024-01-25",
},
{
version: "1.3",
EOLDate: "2023-10-12",
},
]

exports.versionedPages = [
@@ -154,14 +146,6 @@ exports.versionedPages = [
"page": "reference/resource-properties/versions",
"firstVersion": "1.5",
},
{
"page": "reference/dbt-jinja-functions/local-md5",
"firstVersion": "1.4",
},
{
"page": "reference/warehouse-setups/fal-setup",
"firstVersion": "1.3",
},
{
"page": "reference/resource-configs/on_configuration_change",
"firstVersion": "1.6",
@@ -61,7 +61,7 @@ Ready to start transforming your Unity Catalog datasets with dbt?
Check out the resources below for guides, tips, and best practices:

- [How we structure our dbt projects](/best-practices/how-we-structure/1-guide-overview)
- [Self-paced dbt fundamentals training videos](https://courses.getdbt.com/courses/fundamentals)
- [Self-paced dbt fundamentals training course](https://learn.getdbt.com/courses/dbt-fundamentals)
- [Customizing CI/CD](/guides/custom-cicd-pipelines)
- [Debugging errors](/guides/debug-errors)
- [Writing custom generic tests](/best-practices/writing-custom-generic-tests)
2 changes: 1 addition & 1 deletion website/docs/docs/build/build-metrics-intro.md
@@ -67,5 +67,5 @@ MetricFlow allows you to:

- [Quickstart guide with the dbt Semantic Layer](/guides/sl-snowflake-qs)
- [The dbt Semantic Layer: what's next](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/) blog
- [dbt Semantic Layer on-demand courses](https://courses.getdbt.com/courses/semantic-layer)
- [dbt Semantic Layer on-demand course](https://learn.getdbt.com/courses/semantic-layer)
- [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs)
34 changes: 0 additions & 34 deletions website/docs/docs/build/custom-aliases.md
@@ -73,31 +73,6 @@ To override dbt's alias name generation, create a macro named `generate_alias_na

The default implementation of `generate_alias_name` simply uses the supplied `alias` config (if present) as the model alias, otherwise falling back to the model name. This implementation looks like this:

<VersionBlock lastVersion="1.4">

<File name='get_custom_alias.sql'>

```jinja2
{% macro generate_alias_name(custom_alias_name=none, node=none) -%}
{%- if custom_alias_name is none -%}
{{ node.name }}
{%- else -%}
{{ custom_alias_name | trim }}
{%- endif -%}
{%- endmacro %}
```

</File>

</VersionBlock>

<VersionBlock firstVersion="1.5">

<File name='get_custom_alias.sql'>
@@ -176,18 +151,9 @@ If these models should indeed have the same database identifier, you can work ar

#### Model versions

<VersionBlock lastVersion="1.4">

New in v1.5

</VersionBlock>

<VersionBlock firstVersion="1.5">

**Related documentation:**
- [Model versions](/docs/collaborate/govern/model-versions)
- [`versions`](/reference/resource-properties/versions#alias)

By default, dbt will create versioned models with the alias `<model_name>_v<v>`, where `<v>` is that version's unique identifier. You can customize this behavior just like for non-versioned models by configuring a custom `alias` or re-implementing the `generate_alias_name` macro.

</VersionBlock>
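As an illustrative sketch of overriding that default (the model and alias names here are hypothetical, not from the commit above), a versioned model's YAML might look like:

```yaml
models:
  - name: dim_customers
    latest_version: 2
    versions:
      # gets the default alias dim_customers_v2
      - v: 2
      # overrides the default dim_customers_v1 alias
      - v: 1
        config:
          alias: dim_customers_legacy
```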
45 changes: 0 additions & 45 deletions website/docs/docs/build/exposures.md
@@ -12,8 +12,6 @@ Exposures make it possible to define and describe a downstream use of your dbt p

Exposures are defined in `.yml` files nested under an `exposures:` key.

<VersionBlock firstVersion="1.4">

<File name='models/<filename>.yml'>

```yaml
@@ -42,59 +40,16 @@ exposures:
</File>
</VersionBlock>
<VersionBlock lastVersion="1.3">
<File name='models/<filename>.yml'>
```yaml
version: 2

exposures:

- name: weekly_jaffle_report
type: dashboard
maturity: high
url: https://bi.tool/dashboards/1
description: >
Did someone say "exponential growth"?
depends_on:
- ref('fct_orders')
- ref('dim_customers')
- source('gsheets', 'goals')

owner:
name: Callum McData
email: [email protected]
```
</File>
</VersionBlock>
### Available properties
_Required:_
- **name**: a unique exposure name written in [snake case](https://en.wikipedia.org/wiki/Snake_case)
- **type**: one of `dashboard`, `notebook`, `analysis`, `ml`, `application` (used to organize in docs site)
- **owner**: `name` or `email` required; additional properties allowed

<VersionBlock firstVersion="1.4">

_Expected:_
- **depends_on**: list of refable nodes, including `metric`, `ref`, and `source`. While possible, it is highly unlikely you will ever need an `exposure` to depend on a `source` directly.

</VersionBlock>

<VersionBlock lastVersion="1.3">

_Expected:_
- **depends_on**: list of refable nodes, including `ref` and `source` (While possible, it is highly unlikely you will ever need an `exposure` to depend on a `source` directly)

</VersionBlock>

_Optional:_
- **label**: May contain spaces, capital letters, or special characters.
- **url**: Activates and populates the link to **View this exposure** in the upper right corner of the generated documentation site
14 changes: 1 addition & 13 deletions website/docs/docs/build/incremental-strategy.md
@@ -47,16 +47,12 @@ The `merge` strategy is available in dbt-postgres and dbt-redshift beginning in

</VersionBlock>

<VersionBlock firstVersion="1.3">

:::note Snowflake Configurations

dbt v1.3 changed the default materialization for incremental table merges from `temporary table` to `view`. For more information about this change and instructions for setting the configuration to a temp table, please read about [Snowflake temporary tables](/reference/resource-configs/snowflake-configs#temporary-tables).
dbt has changed the default materialization for incremental table merges from `temporary table` to `view`. For more information about this change and instructions for setting the configuration to a temp table, please read about [Snowflake temporary tables](/reference/resource-configs/snowflake-configs#temporary-tables).

:::

</VersionBlock>

### Configuring incremental strategy

The `incremental_strategy` config can either be defined in specific models or
@@ -90,8 +86,6 @@ select ...

</File>

<VersionBlock firstVersion="1.3">

### Strategy-specific configs

If you use the `merge` strategy and specify a `unique_key`, by default, dbt will entirely overwrite matched rows with new values.
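A sketch of such a config (the model and column names are illustrative); on adapters that support it, `merge_update_columns` restricts the overwrite to a subset of columns:

```sql
{{
    config(
        materialized='incremental',
        incremental_strategy='merge',
        unique_key='order_id',
        -- update only these columns on matched rows (adapter-dependent);
        -- all other columns keep their existing values
        merge_update_columns=['status', 'updated_at'],
    )
}}

select * from {{ ref('stg_orders') }}
```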
@@ -134,10 +128,6 @@ select ...

</File>

</VersionBlock>

<VersionBlock firstVersion="1.4">

### About incremental_predicates

`incremental_predicates` is an advanced use of incremental models, where data volume is large enough to justify additional investments in performance. This config accepts a list of any valid SQL expression(s). dbt does not check the syntax of the SQL statements.
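For instance (the table and column names are illustrative, and the alias used for the existing table can vary by adapter):

```sql
{{
    config(
        materialized='incremental',
        incremental_strategy='merge',
        unique_key='id',
        -- passed through verbatim to the merge statement; limits how much
        -- of the existing table is scanned when matching rows
        incremental_predicates=[
            "DBT_INTERNAL_DEST.session_start > dateadd(day, -7, current_date)"
        ]
    )
}}

select * from {{ ref('events') }}
```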
@@ -216,8 +206,6 @@ The syntax depends on how you configure your `incremental_strategy`:
- There's a decent amount of conceptual overlap with the `insert_overwrite` incremental strategy.
:::

</VersionBlock>

### Built-in strategies

Before diving into [custom strategies](#custom-strategies), it's important to understand the built-in incremental strategies in dbt and their corresponding macros:
4 changes: 0 additions & 4 deletions website/docs/docs/build/packages.md
@@ -145,8 +145,6 @@ To find the latest release for a package, navigate to the `Releases` tab in the

As of v0.14.0, dbt will warn you if you install a package using the `git` syntax without specifying a version (see below).
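To avoid that warning, you can pin the package to a tag or commit with `revision`; for example (the repository and tag here are illustrative):

```yaml
packages:
  - git: "https://github.com/dbt-labs/dbt-utils.git"
    # pin to a release tag (a branch name or full commit SHA also works)
    revision: 1.1.1
```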

<VersionBlock firstVersion="1.4">

### Internally hosted tarball URL

Some organizations have security requirements to pull resources only from internal services. To address the need to install packages from hosted environments such as Artifactory or cloud storage buckets, dbt Core enables you to install packages from internally-hosted tarball URLs.
@@ -160,8 +158,6 @@ packages:

Where `name: 'dbt_utils'` specifies the subfolder of `dbt_packages` that's created for the package source code to be installed within.
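A minimal sketch of what this looks like in `packages.yml` (the URL is a placeholder for your internal artifact host):

```yaml
packages:
  - tarball: "https://internal.example.com/artifacts/dbt-utils-1.1.1.tar.gz"
    # subfolder of dbt_packages to install the package source code into
    name: "dbt_utils"
```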

</VersionBlock>

### Private packages

#### SSH Key Method (Command Line only)
6 changes: 2 additions & 4 deletions website/docs/docs/build/python-models.md
@@ -3,7 +3,7 @@ title: "Python models"
id: "python-models"
---

dbt Core v1.3 adds support for Python models. Note that only [specific data platforms](#specific-data-platforms) support dbt-py models.
Note that only [specific data platforms](#specific-data-platforms) support dbt-py models.

We encourage you to:
- Read [the original discussion](https://github.com/dbt-labs/dbt-core/discussions/5261) that proposed this feature.
@@ -16,7 +16,6 @@ We encourage you to:

dbt Python (`dbt-py`) models can help you solve use cases that can't be solved with SQL. You can perform analyses using tools available in the open-source Python ecosystem, including state-of-the-art packages for data science and statistics. Before, you would have needed separate infrastructure and orchestration to run Python transformations in production. Python transformations defined in dbt are models in your project with all the same capabilities around testing, documentation, and lineage.

<VersionBlock firstVersion="1.3">

<File name='models/my_python_model.py'>

@@ -257,7 +256,7 @@ def model(dbt, session):
### Materializations

Python models support these materializations:
- `table` <VersionBlock firstVersion="1.4">(default)</VersionBlock>
- `table` (default)
- `incremental`

Incremental Python models support all the same [incremental strategies](/docs/build/incremental-strategy) as their SQL counterparts. The specific strategies supported depend on your adapter. As an example, incremental models are supported on BigQuery with Dataproc for the `merge` incremental strategy; the `insert_overwrite` strategy is not yet supported.
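As a rough sketch (assuming a Snowpark-style DataFrame API; the model, table, and column names are illustrative), an incremental Python model can filter its input down to new rows:

```python
def model(dbt, session):
    # incremental materialization; supported strategies depend on the adapter
    dbt.config(materialized="incremental", unique_key="event_id")

    df = dbt.ref("stg_events")

    if dbt.is_incremental:
        # illustrative: only process events newer than what this model already holds
        max_ts = session.sql(
            f"select max(event_ts) from {dbt.this}"
        ).collect()[0][0]
        df = df.filter(df.event_ts > max_ts)

    return df
```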
@@ -782,4 +781,3 @@ You can also install packages at cluster creation time by [defining cluster prop

</WHCode>

</VersionBlock>
7 changes: 1 addition & 6 deletions website/docs/docs/build/sql-models.md
@@ -18,13 +18,8 @@ If you're new to dbt, we recommend that you read a [quickstart guide](/guides) t

:::

<VersionBlock firstVersion="1.3">
dbt's Python capabilities are an extension of its capabilities with SQL models. If you're new to dbt, we recommend that you read this page first, before reading: ["Python Models"](/docs/build/python-models)

Starting in v1.3, dbt Core adds support for **Python models**.

dbt's Python capabilities are an extension of its capabilities with SQL models. If you're new to dbt, we recommend that you read this page first, before reading: ["Python Models"](/docs/building-a-dbt-project/building-models/python-models)

</VersionBlock>

A SQL model is a `select` statement. Models are defined in `.sql` files (typically in your `models` directory):
- Each `.sql` file contains one model / `select` statement
@@ -39,5 +39,5 @@ import AvailIntegrations from '/snippets/_sl-partner-links.md';
- [dbt Semantic Layer API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata)
- [Hex dbt Semantic Layer cells](https://learn.hex.tech/docs/logic-cell-types/transform-cells/dbt-metrics-cells) to set up SQL cells in Hex.
- [Resolve 'Failed APN'](/faqs/Troubleshooting/sl-alpn-error) error when connecting to the dbt Semantic Layer.
- [dbt Semantic Layer on-demand courses](https://courses.getdbt.com/courses/semantic-layer)
- [dbt Semantic Layer on-demand course](https://learn.getdbt.com/courses/semantic-layer)
- [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs)
2 changes: 1 addition & 1 deletion website/docs/docs/cloud/about-cloud-setup.md
@@ -18,7 +18,7 @@ This portion of our documentation will take you through the various settings in

For steps on installing dbt Cloud development tools, refer to the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) or the [dbt Cloud IDE (browser-based)](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud).

These settings are intended for dbt Cloud administrators. If you need a more detailed first-time setup guide for specific data platforms, read our [quickstart guides](/guides). If you want a more in-depth learning experience, we recommend taking the dbt Fundamentals on our [dbt Learn online courses site](https://courses.getdbt.com/).
These settings are intended for dbt Cloud administrators. If you need a more detailed first-time setup guide for specific data platforms, read our [quickstart guides](/guides). If you want a more in-depth learning experience, we recommend taking the dbt Fundamentals on our [dbt Learn site](https://learn.getdbt.com/).

## Prerequisites
