Commit

Merge branch 'current' into patch-17
mirnawong1 authored Apr 9, 2024
2 parents 58d0b90 + fda2af4 commit fcd5157
Showing 51 changed files with 545 additions and 245 deletions.
2 changes: 1 addition & 1 deletion website/blog/2023-08-01-announcing-materialized-views.md
Original file line number Diff line number Diff line change
@@ -150,7 +150,7 @@ config(
on_configuration_change = 'apply',
enable_refresh = True,
refresh_interval_minutes = 30
max_staleness = 60,
max_staleness = 'INTERVAL 60 MINUTE'
)
}}
```
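For context, the corrected `max_staleness` value sits inside a full model config block. A minimal sketch of how the complete block might look for a BigQuery materialized view — the `materialized` key, model name, and SELECT statement are assumptions not shown in the hunk:

```sql
-- models/orders_mv.sql (hypothetical model name)
{{
  config(
    materialized = 'materialized_view',
    on_configuration_change = 'apply',
    enable_refresh = True,
    refresh_interval_minutes = 30,
    max_staleness = 'INTERVAL 60 MINUTE'
  )
}}

select order_id, sum(amount) as total_amount
from {{ ref('stg_orders') }}
group by order_id
```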
2 changes: 1 addition & 1 deletion website/docs/docs/build/custom-aliases.md
@@ -6,7 +6,7 @@ id: "custom-aliases"

## Overview

When dbt runs a model, it will generally create a relation (either a `table` or a `view`) in the database. By default, dbt uses the filename of the model as the identifier for this relation in the database. This identifier can optionally be overridden using the `alias` model configuration.
When dbt runs a model, it will generally create a relation (either a `table` or a `view`) in the database. By default, dbt uses the filename of the model as the identifier for this relation in the database. This identifier can optionally be overridden using the [`alias`](/reference/resource-configs/alias) model configuration.

### Why alias model names?
The names of schemas and tables are effectively the "user interface" of your <Term id="data-warehouse" />. Well-named schemas and tables can help provide clarity and direction for consumers of this data. In combination with [custom schemas](/docs/build/custom-schemas), model aliasing is a powerful mechanism for designing your warehouse.
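As a sketch of how the `alias` config is applied in practice (the model and alias names here are hypothetical):

```sql
-- models/sales_total.sql
-- Builds as <schema>.sales_dashboard instead of <schema>.sales_total
{{ config(alias = 'sales_dashboard') }}

select * from {{ ref('stg_sales') }}
```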
12 changes: 6 additions & 6 deletions website/docs/docs/build/dimensions.md
@@ -112,10 +112,10 @@ You can use multiple time groups in separate metrics. For example, the `users_cr

```bash
# dbt Cloud users
dbt sl query --metrics users_created,users_deleted --dimensions metric_time --order metric_time
dbt sl query --metrics users_created,users_deleted --group-by metric_time__year --order-by metric_time__year
# dbt Core users
mf query --metrics users_created,users_deleted --dimensions metric_time --order metric_time
mf query --metrics users_created,users_deleted --group-by metric_time__year --order-by metric_time__year
```


@@ -133,10 +133,10 @@ MetricFlow enables metric aggregation during query time. For example, you can ag

```bash
# dbt Cloud users
dbt sl query --metrics messages_per_month --dimensions metric_time --order metric_time --time-granularity year
dbt sl query --metrics messages_per_month --group-by metric_time__year --order-by metric_time__year
# dbt Core users
mf query --metrics messages_per_month --dimensions metric_time --order metric_time --time-granularity year
mf query --metrics messages_per_month --group-by metric_time__year --order-by metric_time__year
```

```yaml
@@ -361,10 +361,10 @@ The following command or code represents how to return the count of transactions

```bash
# dbt Cloud users
dbt sl query --metrics transactions --dimensions metric_time__month,sales_person__tier --order metric_time__month --order sales_person__tier
dbt sl query --metrics transactions --group-by metric_time__month,sales_person__tier --order-by metric_time__month,sales_person__tier
# dbt Core users
mf query --metrics transactions --dimensions metric_time__month,sales_person__tier --order metric_time__month --order sales_person__tier
mf query --metrics transactions --group-by metric_time__month,sales_person__tier --order-by metric_time__month,sales_person__tier
```

3 changes: 2 additions & 1 deletion website/docs/docs/build/jinja-macros.md
@@ -163,7 +163,8 @@ You can also qualify a macro in your own project by prefixing it with your [pack

Just like well-written python is pythonic, well-written dbt code is dbtonic.

### Favor readability over <Term id="dry" />-ness
### Favor readability over <Term id="dry" />-ness {#favor-readability-over-dry-ness}

Once you learn the power of Jinja, it's common to want to abstract every repeated line into a macro! Remember that using Jinja can make your models harder for other users to interpret — we recommend favoring readability when mixing Jinja with SQL, even if it means repeating some lines of SQL in a few places. If all your models are macros, it might be worth re-assessing.
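To illustrate the trade-off with a hypothetical example (the macro and column names are not from the docs): a trivial calculation wrapped in a macro forces readers to chase down the macro definition, while the inline SQL is self-explanatory.

```sql
-- More DRY, but harder to scan: what does the macro actually return?
select {{ cents_to_dollars('amount_cents') }} as amount
from {{ ref('stg_payments') }}

-- Repetitive across models, but immediately readable:
select amount_cents / 100.0 as amount
from {{ ref('stg_payments') }}
```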

### Leverage package macros
2 changes: 1 addition & 1 deletion website/docs/docs/build/metrics-overview.md
@@ -7,7 +7,7 @@ tags: [Metrics, Semantic Layer]
pagination_next: "docs/build/cumulative"
---

Once you've created your semantic models, it's time to start adding metrics! Metrics can be defined in the same YAML files as your semantic models, or split into separate YAML files into any other subdirectories (provided that these subdirectories are also within the same dbt project repo)
Once you've created your semantic models, it's time to start adding metrics. Metrics can be defined in the same YAML files as your semantic models, or split into separate YAML files into any other subdirectories (provided that these subdirectories are also within the same dbt project repo).
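A minimal simple-metric definition might look like this sketch (the metric and measure names are hypothetical):

```yaml
metrics:
  - name: order_total          # hypothetical metric name
    description: Sum of order totals
    label: Order Total
    type: simple
    type_params:
      measure: order_total     # references a measure defined in a semantic model
```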

The keys for metrics definitions are:

1 change: 1 addition & 0 deletions website/docs/docs/build/saved-queries.md
@@ -131,6 +131,7 @@ To define a saved query, refer to the following parameters:
</VersionBlock>

All metrics in a saved query need to use the same dimensions in the `group_by` or `where` clauses.
When using the `Dimension` object, prepend the semantic model name, for example `Dimension('user__ds')`.
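A sketch of a saved query using the qualified `Dimension` object (the saved query, metric, and semantic model names are hypothetical):

```yaml
saved_queries:
  - name: orders_by_day
    query_params:
      metrics:
        - order_total
      group_by:
        - Dimension('user__ds')               # semantic model name 'user' prepended
      where:
        - "{{ Dimension('user__ds') }} >= '2024-01-01'"
```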

## Related docs

4 changes: 2 additions & 2 deletions website/docs/docs/build/semantic-models.md
@@ -14,9 +14,9 @@ Semantic models are the foundation for data definition in MetricFlow, which powe
- Think of semantic models as nodes connected by entities in a semantic graph.
- MetricFlow uses YAML configuration files to create this graph for querying metrics.
- Each semantic model corresponds to a dbt model in your DAG, requiring a unique YAML configuration for each semantic model.
- You can create multiple semantic models from a single dbt model, as long as you give each semantic model a unique name.
- You can create multiple semantic models from a single dbt model (SQL or Python), as long as you give each semantic model a unique name.
- Configure semantic models in a YAML file within your dbt project directory.
- Organize them under a `metrics:` folder or within project sources as needed.
- Organize them under a `metrics:` folder or within project sources as needed.

<Lightbox src="/img/docs/dbt-cloud/semantic-layer/semantic_foundation.jpg" width="70%" title="A semantic model is made up of different components: Entities, Measures, and Dimensions."/>
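The bullets above can be sketched as a minimal semantic model configuration (all names are hypothetical):

```yaml
semantic_models:
  - name: orders                    # unique name, one per semantic model
    model: ref('fct_orders')        # the underlying dbt model (SQL or Python)
    defaults:
      agg_time_dimension: ordered_at
    entities:
      - name: order_id
        type: primary               # entities connect nodes in the semantic graph
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: order_total
        agg: sum
```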

10 changes: 5 additions & 5 deletions website/docs/docs/cloud/billing.md
@@ -53,31 +53,31 @@ Examples of queried metrics include:
- Querying one metric, grouping by one dimension → 1 queried metric

```shell
dbt sl query --metrics revenue --group_by metric_time
dbt sl query --metrics revenue --group-by metric_time
```

- Querying one metric, grouping by two dimensions → 1 queried metric

```shell
dbt sl query --metrics revenue --group_by metric_time,user__country
dbt sl query --metrics revenue --group-by metric_time,user__country
```

- Querying two metrics, grouping by two dimensions → 2 queried metrics

```shell
dbt sl query --metrics revenue,gross_sales --group_by metric_time,user__country
dbt sl query --metrics revenue,gross_sales --group-by metric_time,user__country
```

- Running an explain for one metric → 1 queried metric

```shell
dbt sl query --metrics revenue --group_by metric_time --explain
dbt sl query --metrics revenue --group-by metric_time --explain
```

- Running an explain for two metrics → 2 queried metrics

```shell
dbt sl query --metrics revenue,gross_sales --group_by metric_time --explain
dbt sl query --metrics revenue,gross_sales --group-by metric_time --explain
```

### Viewing usage in the product
@@ -6,13 +6,13 @@ sidebar_label: "Connect Microsoft Fabric"

## Supported authentication methods
The supported authentication methods are:
- Azure Active Directory (Azure AD) service principal
- Azure AD password
- Microsoft Entra service principal
- Microsoft Entra password

SQL password (LDAP) is not supported in Microsoft Fabric Synapse Data Warehouse so you must use Azure AD. This means that to use [Microsoft Fabric](https://www.microsoft.com/en-us/microsoft-fabric) in dbt Cloud, you will need at least one Azure AD service principal to connect dbt Cloud to Fabric, ideally one service principal for each user.
SQL password (LDAP) is not supported in Microsoft Fabric Synapse Data Warehouse, so you must use Microsoft Entra ID. This means that to use [Microsoft Fabric](https://www.microsoft.com/en-us/microsoft-fabric) in dbt Cloud, you need at least one Microsoft Entra service principal to connect dbt Cloud to Fabric, ideally one service principal per user.

### Active Directory service principal
The following are the required fields for setting up a connection with a Microsoft Fabric using Azure AD service principal authentication.
### Microsoft Entra service principal
The following are the required fields for setting up a connection with Microsoft Fabric using Microsoft Entra service principal authentication.

| Field | Description |
| --- | --- |
@@ -25,18 +25,18 @@ The following are the required fields for setting up a connection with a Microso
| **Client secret** | The service principal's **client secret** (not the **client secret id**). |
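For comparison only (not part of the dbt Cloud setup described above), the same service-principal fields in a dbt Core `profiles.yml` for the `dbt-fabric` adapter look roughly like this sketch; all values are placeholders:

```yaml
my_project:
  target: fabric
  outputs:
    fabric:
      type: fabric
      driver: "ODBC Driver 18 for SQL Server"
      server: your-server.datawarehouse.fabric.microsoft.com   # placeholder
      port: 1433
      database: your_database
      schema: dbo
      authentication: ServicePrincipal
      tenant_id: "00000000-0000-0000-0000-000000000000"        # placeholder GUID
      client_id: "00000000-0000-0000-0000-000000000000"        # placeholder GUID
      client_secret: "your-client-secret"                      # the secret value, not its id
```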


### Active Directory password
### Microsoft Entra password

The following are the required fields for setting up a connection with a Microsoft Fabric using Azure AD password authentication.
The following are the required fields for setting up a connection with Microsoft Fabric using Microsoft Entra password authentication.

| Field | Description |
| --- | --- |
| **Server** | The server hostname to connect to Microsoft Fabric. |
| **Port** | The server port. You can use `1433` (the default), which is the standard SQL server port number. |
| **Database** | The database name. |
| **Authentication** | Choose **Active Directory Password** from the dropdown. |
| **User** | The AD username. |
| **Password** | The AD username's password. |
| **User** | The Microsoft Entra username. |
| **Password** | The Microsoft Entra password. |

## Configuration
