From 869a56d0977712da238ac240e014f883a678f7b4 Mon Sep 17 00:00:00 2001
From: Ben Cassell <98852248+benc-db@users.noreply.github.com>
Date: Thu, 30 Nov 2023 12:58:50 -0800
Subject: [PATCH 01/15] Update databricks-configs.md to cover compute per model

---
 .../resource-configs/databricks-configs.md   | 144 ++++++++++++++++++
 1 file changed, 144 insertions(+)

diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index a3b00177967..8b09eb5326c 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -361,6 +361,150 @@ insert into analytics.replace_where_incremental
 
 
+
+## Selecting compute per model
+
+Beginning in version 1.7.2, you can assign which compute to use on a per-model basis.
+To take advantage of this capability, you will need to add compute blocks to your profile:
+
+```yaml
+
+<profile-name>:
+  target: <target-name> # this is the default target
+  outputs:
+    <target-name>:
+      type: databricks
+      catalog: [optional catalog name if you are using Unity Catalog]
+      schema: [schema name] # Required
+      host: [yourorg.databrickshost.com] # Required
+
+      ### This path is used as the default compute
+      http_path: [/sql/your/http/path] # Required
+
+      ### New compute section
+      compute:
+
+        ### Name that you will use to refer to an alternate compute
+        AltCompute:
+          http_path: ['/sql/your/http/path'] # Required of each alternate compute
+
+        ### A third named compute, use whatever name you like
+        Compute2:
+          http_path: ['/some/other/path'] # Required of each alternate compute
+    ...
+
+    <target-name>: # additional targets
+    ...
+      ### For each target, you need to define the same compute,
+      ### but you can specify different paths
+      compute:
+
+        ### Name that you will use to refer to an alternate compute
+        Compute1:
+          http_path: ['/sql/your/http/path'] # Required of each alternate compute
+
+        ### A third named compute, use whatever name you like
+        Compute2:
+          http_path: ['/some/other/path'] # Required of each alternate compute
+    ...
+
+```
+
+The new compute section is a map of user-chosen names to objects with an `http_path` property.
+Each compute is keyed by a name which is used in the model definition/configuration to indicate which compute you wish to use for that model/selection of models.
+
+:::note
+
+You need to use the same set of names for compute across your outputs, though you may supply different http_paths, allowing you to use different computes in different deployment scenarios.
+
+:::
+
+### Specifying the compute for models
+
+As with many other configuration options, you can specify the compute for a model in multiple ways, using `databricks_compute`.
+In your `dbt_project.yml`, the selected compute can be specified for all the models in a given directory:
+
+```yaml
+
+...
+
+models:
+  +databricks_compute: "Compute1" # use the `Compute1` warehouse/cluster for all models in the project...
+  my_project:
+    clickstream:
+      +databricks_compute: "Compute2" # ...except for the models in the `clickstream` folder, which will use `Compute2`.
+
+snapshots:
+  +databricks_compute: "Compute1" # all snapshot models are configured to use `Compute1`.
+
+```
+
+For an individual model, the compute can be specified in the model config in your schema file.
+
+```yaml
+
+models:
+  - name: table_model
+    config:
+      databricks_compute: Compute1
+    columns:
+      - name: id
+        data_type: int
+
+```
+
+Alternatively the warehouse can be specified in the config block of a model's SQL file.
+
+```sql
+
+{{
+  config(
+    materialized='table',
+    databricks_compute='Compute1'
+  )
+}}
+select * from {{ ref('seed') }}
+
+```
+
+:::note
+
+In the absence of a specified compute, we will default to the compute specified by `http_path` in the top level of the output section in your profile.
+This is also the compute that will be used for tasks not associated with a particular model, such as gathering metadata for all tables in a schema.
+
+:::
+
+To validate that the specified compute is being used, look for lines in your dbt.log like:
+
+```
+Databricks adapter ... using default compute resource.
+```
+
+or
+
+```
+Databricks adapter ... using compute resource <name of compute>.
+```
+
 
 ## Persisting model descriptions

From 51ffdc274a6308e044e07cb0ae741ac83482c993 Mon Sep 17 00:00:00 2001
From: Ben Cassell <98852248+benc-db@users.noreply.github.com>
Date: Thu, 30 Nov 2023 13:30:49 -0800
Subject: [PATCH 02/15] Update website/docs/reference/resource-configs/databricks-configs.md

Co-authored-by: Amy Chen <46451573+amychen1776@users.noreply.github.com>
---
 website/docs/reference/resource-configs/databricks-configs.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index 8b09eb5326c..e4027edaf2e 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -388,7 +388,7 @@ To take advantage of this capability, you will need to add compute blocks to you
       compute:
 
         ### Name that you will use to refer to an alternate compute
-        AltCompute:
+        Compute1:
           http_path: ['/sql/your/http/path'] # Required of each alternate compute
 
         ### A third named compute, use whatever name you like
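The two patches above define three places where `databricks_compute` can be set: the project file, the schema file, and the model's own config block. A compact sketch of how those levels combine may help when reviewing. The `clickstream` folder is taken from the examples above, the model name is hypothetical, and the resolution order shown (model-level config over folder-level over project default) is standard dbt config precedence:

```yaml
# dbt_project.yml (sketch): project-wide default plus a folder override
models:
  +databricks_compute: "Compute1"      # default for every model in the project
  my_project:
    clickstream:
      +databricks_compute: "Compute2"  # overrides the project default for this folder

# models/clickstream/page_views.sql (hypothetical model) can still override both:
#   {{ config(databricks_compute='Compute1') }}
# Model-level config wins, so page_views would run on Compute1 despite the folder setting.
```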
From 7389ad9fd15212192a79c6a92cc9e7c325fb6825 Mon Sep 17 00:00:00 2001
From: Ben Cassell <98852248+benc-db@users.noreply.github.com>
Date: Thu, 30 Nov 2023 13:31:49 -0800
Subject: [PATCH 03/15] Update website/docs/reference/resource-configs/databricks-configs.md

Co-authored-by: Amy Chen <46451573+amychen1776@users.noreply.github.com>
---
 .../docs/reference/resource-configs/databricks-configs.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index e4027edaf2e..bb226045788 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -423,7 +423,14 @@ You need to use the same set of names for compute across your outputs, though yo
 
 :::
+To configure this inside of dbt Cloud, use the [extended attributes feature](/docs/dbt-cloud-environments#extended-attributes-) on the desired environments. You can input them as such.
 
+```yaml
+compute:
+  Compute1:
+    http_path: ['/some/other/path']
+  Compute2:
+    http_path: ['/some/other/path']
 ### Specifying the compute for models
 
 As with many other configuration options, you can specify the compute for a model in multiple ways, using `databricks_compute`.

From 2434bda1646fcf63bf81d7bb61d8bdcee67323a9 Mon Sep 17 00:00:00 2001
From: Ben Cassell <98852248+benc-db@users.noreply.github.com>
Date: Thu, 30 Nov 2023 13:32:03 -0800
Subject: [PATCH 04/15] Update website/docs/reference/resource-configs/databricks-configs.md

Co-authored-by: Amy Chen <46451573+amychen1776@users.noreply.github.com>
---
 website/docs/reference/resource-configs/databricks-configs.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index bb226045788..800ba7430f0 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -416,7 +416,7 @@ To take advantage of this capability, you will need to add compute blocks to you
 
 The new compute section is a map of user-chosen names to objects with an `http_path` property.
-Each compute is keyed by a name which is used in the model definition/configuration to indicate which compute you wish to use for that model/selection of models.
+Each compute is keyed by a name which is used in the model definition/configuration to indicate which compute you wish to use for that model/selection of models. We recommend choosing a name that is easily recognized as to what compute resources you're using, such as what the compute resource is named inside of the Databricks UI.
 
 :::note

From 99a0345d161a6a956669cc1f60f20370937179a8 Mon Sep 17 00:00:00 2001
From: Ben Cassell <98852248+benc-db@users.noreply.github.com>
Date: Thu, 30 Nov 2023 16:29:52 -0800
Subject: [PATCH 05/15] Update databricks-configs.md - Adding python discussion.

---
 .../resource-configs/databricks-configs.md   | 25 ++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index 800ba7430f0..f218cec3172 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -363,7 +363,7 @@ insert into analytics.replace_where_incremental
 
-## Selecting compute per model
+### Selecting compute per model
 
 Beginning in version 1.7.2, you can assign which compute to use on a per-model basis.
 To take advantage of this capability, you will need to add compute blocks to your profile:
@@ -511,6 +511,29 @@ or
 Databricks adapter ... using compute resource <name of compute>.
 ```
 
+### Selecting compute for a Python model
+
+Materializing a Python model requires execution of SQL as well as Python.
+Specifically, if your Python model is incremental, the current execution pattern involves executing Python to create a staging table that is then merged into your target table using SQL.
+The Python code needs to run on an all-purpose cluster, while the SQL code can run on an all-purpose cluster or a SQL Warehouse.
+When you specify your `databricks_compute` for a Python model, you are currently only specifying which compute to use when running the model-specific SQL.
+If you wish to use a different compute for executing the Python itself, you must specify an alternate `http_path` in the config for the model:
+
+```python
+
+def model(dbt, session):
+    dbt.config(
+      http_path="sql/protocolv1/..."
+    )
+
+```
+
+If your default compute is a SQL Warehouse, you will need to specify an all-purpose cluster `http_path` in this way.
+
 ## Persisting model descriptions

From c129674e9d9f0eca1421bc7f35228c5e5911f7dc Mon Sep 17 00:00:00 2001
From: Ben Cassell <98852248+benc-db@users.noreply.github.com>
Date: Thu, 30 Nov 2023 16:44:04 -0800
Subject: [PATCH 06/15] Update databricks-configs.md - clean up

---
 .../resource-configs/databricks-configs.md   | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index f218cec3172..49aa4dd3a84 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -363,9 +363,11 @@ insert into analytics.replace_where_incremental
 
-### Selecting compute per model
+## Selecting compute per model
 
-Beginning in version 1.7.2, you can assign which compute to use on a per-model basis.
+Beginning in version 1.7.2, you can assign which compute resource to use on a per-model basis.
+For SQL models, you can select a SQL Warehouse (serverless or provisioned) or an all-purpose cluster.
+For details on how this feature interacts with Python models, see [Specifying compute for Python models](#specifying-compute-for-python-models).
 To take advantage of this capability, you will need to add compute blocks to your profile:
@@ -423,14 +425,20 @@ Each compute is keyed by a name which is used in the model definition/configurat
 
 You need to use the same set of names for compute across your outputs, though you may supply different http_paths, allowing you to use different computes in different deployment scenarios.
 
 :::
-To configure this inside of dbt Cloud, use the [extended attributes feature](/docs/dbt-cloud-environments#extended-attributes-) on the desired environments. You can input them as such.
+
+To configure this inside of dbt Cloud, use the [extended attributes feature](/docs/dbt-cloud-environments#extended-attributes-) on the desired environments.
+You can input like so:
 
 ```yaml
+
 compute:
   Compute1:
     http_path: ['/some/other/path']
   Compute2:
     http_path: ['/some/other/path']
+
+```
+
 ### Specifying the compute for models
@@ -511,7 +519,7 @@ or
 Databricks adapter ... using compute resource <name of compute>.
 ```
 
-### Selecting compute for a Python model
+### Specifying compute for Python models
 
 Materializing a Python model requires execution of SQL as well as Python.
 Specifically, if your Python model is incremental, the current execution pattern involves executing Python to create a staging table that is then merged into your target table using SQL.
 The Python code needs to run on an all-purpose cluster, while the SQL code can run on an all-purpose cluster or a SQL Warehouse.
 When you specify your `databricks_compute` for a Python model, you are currently only specifying which compute to use when running the model-specific SQL.
 If you wish to use a different compute for executing the Python itself, you must specify an alternate `http_path` in the config for the model:

From 1d6bf4997b88f7e99ef1c6fb10cd125a3370545f Mon Sep 17 00:00:00 2001
From: Ben Cassell <98852248+benc-db@users.noreply.github.com>
Date: Thu, 30 Nov 2023 16:48:30 -0800
Subject: [PATCH 07/15] Update databricks-configs.md - minor touch ups

---
 .../docs/reference/resource-configs/databricks-configs.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index 49aa4dd3a84..8426846997c 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -418,7 +418,8 @@ To take advantage of this capability, you will need to add compute blocks to you
 
 The new compute section is a map of user-chosen names to objects with an `http_path` property.
-Each compute is keyed by a name which is used in the model definition/configuration to indicate which compute you wish to use for that model/selection of models. We recommend choosing a name that is easily recognized as to what compute resources you're using, such as what the compute resource is named inside of the Databricks UI.
+Each compute is keyed by a name which is used in the model definition/configuration to indicate which compute you wish to use for that model/selection of models.
+We recommend choosing a name that is easily recognized as the compute resources you're using, such as the name of the compute resource inside the Databricks UI.
 
 :::note
@@ -426,8 +427,7 @@ You need to use the same set of names for compute across your outputs, though yo
 
 :::
 
-To configure this inside of dbt Cloud, use the [extended attributes feature](/docs/dbt-cloud-environments#extended-attributes-) on the desired environments.
-You can input like so:
+To configure this inside of dbt Cloud, use the [extended attributes feature](/docs/dbt-cloud-environments#extended-attributes-) on the desired environments:
 
 ```yaml
From ea0f2d8627c894100cbfd26c5c6d3870f6e85b63 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Mon, 4 Dec 2023 13:41:23 -0800
Subject: [PATCH 08/15] Updating QS to reflect service account correct roles

---
 website/docs/guides/bigquery-qs.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/guides/bigquery-qs.md b/website/docs/guides/bigquery-qs.md
index c1f632f0621..9cf2447fa52 100644
--- a/website/docs/guides/bigquery-qs.md
+++ b/website/docs/guides/bigquery-qs.md
@@ -78,7 +78,7 @@ In order to let dbt connect to your warehouse, you'll need to generate a keyfile
    - Click **Next** to create a new service account.
 2. Create a service account for your new project from the [Service accounts page](https://console.cloud.google.com/projectselector2/iam-admin/serviceaccounts?supportedpurview=project). For more information, refer to [Create a service account](https://developers.google.com/workspace/guides/create-credentials#create_a_service_account) in the Google Cloud docs. As an example for this guide, you can:
    - Type `dbt-user` as the **Service account name**
-   - From the **Select a role** dropdown, choose **BigQuery Admin** and click **Continue**
+   - From the **Select a role** dropdown, choose the **BigQuery Job User** and **BigQuery Data Editor** roles and click **Continue**
    - Leave the **Grant users access to this service account** fields blank
    - Click **Done**
 3. Create a service account key for your new project from the [Service accounts page](https://console.cloud.google.com/iam-admin/serviceaccounts?walkthrough_id=iam--create-service-account-keys&start_index=1#step_index=1). For more information, refer to [Create a service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) in the Google Cloud docs. When downloading the JSON file, make sure to use a filename you can easily remember. For example, `dbt-user-creds.json`. For security reasons, dbt Labs recommends that you protect this JSON file like you would your identity credentials; for example, don't check the JSON file into your version control software.

From c9047178285e293d54ebe7e175d3509a6d124900 Mon Sep 17 00:00:00 2001
From: Pat Kearns
Date: Wed, 6 Dec 2023 09:54:45 +1100
Subject: [PATCH 09/15] Update sso-overview.md

You might want to update the documentation to highlight that access with local password authentication will persist unless a departed employee's user record is deleted in dbt.
---
 website/docs/docs/cloud/manage-access/sso-overview.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/website/docs/docs/cloud/manage-access/sso-overview.md b/website/docs/docs/cloud/manage-access/sso-overview.md
index f613df7907e..e0695708957 100644
--- a/website/docs/docs/cloud/manage-access/sso-overview.md
+++ b/website/docs/docs/cloud/manage-access/sso-overview.md
@@ -67,4 +67,5 @@ If you have any non-admin users logging into dbt Cloud with a password today:
 1. Ensure that all users have a user account in your identity provider and are assigned dbt Cloud so they won’t lose access.
 2. Alert all dbt Cloud users that they won’t be able to use a password for logging in anymore unless they are already an Admin with a password.
 3. We **DO NOT** recommend promoting any users to Admins just to preserve password-based logins because you will reduce security of your dbt Cloud environment.
+4. If an Admin leaves your company, manually delete their user to prevent non-SSO access via username and password.
 **

From 025bbac6d50ad512388aa48b86778a7827e83801 Mon Sep 17 00:00:00 2001
From: Ly Nguyen
Date: Wed, 6 Dec 2023 09:58:35 -0800
Subject: [PATCH 10/15] Release note for GA of extended attrs

---
 .../74-Dec-2023/external-attributes.md       | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
 create mode 100644 website/docs/docs/dbt-versions/release-notes/74-Dec-2023/external-attributes.md

diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/external-attributes.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/external-attributes.md
new file mode 100644
index 00000000000..25791b66fb1
--- /dev/null
+++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/external-attributes.md
@@ -0,0 +1,16 @@
+---
+title: "Update: Extended attributes is GA"
+description: "December 2023: The extended attributes feature is now GA in dbt Cloud. It enables you to override dbt adapter YAML attributes at the environment level."
+sidebar_label: "Update: Extended attributes is GA"
+sidebar_position: 10
+tags: [Dec-2023]
+date: 2023-12-06
+---
+
+The extended attributes feature in dbt Cloud is now GA! It allows for an environment-level override on any YAML attribute that a dbt adapter accepts in its `profiles.yml`. You can provide a YAML snippet to add or replace any [profile](/docs/core/connect-data-platform/profiles.yml) value.
+
+To learn more, refer to [Extended attributes](/docs/dbt-cloud-environments#extended-attributes).
+
+The **Extended Attributes** text box is available from your environment's settings page:
+
+
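Since the release note above only links out for details, a minimal sketch of what the extended attributes YAML could look like for the per-model compute feature may help connect the two threads of this series. The warehouse paths are placeholders, and the shape mirrors the compute block added in patch 03:

```yaml
compute:
  Compute1:
    http_path: '/sql/1.0/warehouses/your_warehouse_id'    # placeholder path
  Compute2:
    http_path: '/sql/1.0/warehouses/another_warehouse_id' # placeholder path
```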
From 2a37313cc201d71ec83d03c746dac5482ea05811 Mon Sep 17 00:00:00 2001
From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
Date: Wed, 6 Dec 2023 16:42:24 -0500
Subject: [PATCH 11/15] Update sso-overview.md

Moving the addition up to security best practices
---
 website/docs/docs/cloud/manage-access/sso-overview.md | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/website/docs/docs/cloud/manage-access/sso-overview.md b/website/docs/docs/cloud/manage-access/sso-overview.md
index e0695708957..b4954955c8c 100644
--- a/website/docs/docs/cloud/manage-access/sso-overview.md
+++ b/website/docs/docs/cloud/manage-access/sso-overview.md
@@ -57,8 +57,9 @@ Non-admin users that currently login with a password will no longer be able to d
 ### Security best practices
 
 There are a few scenarios that might require you to login with a password. We recommend these security best-practices for the two most common scenarios:
-* **Onboarding partners and contractors** - We highly recommend that you add partners and contractors to your Identity Provider. IdPs like Okta and Azure Active Directory (AAD) offer capabilities explicitly for temporary employees. We highly recommend that you reach out to your IT team to provision an SSO license for these situations. Using an IdP highly secure, reduces any breach risk, and significantly increases the security posture of your dbt Cloud environment.
-* **Identity Provider is down -** Account admins will continue to be able to log in with a password which would allow them to work with your Identity Provider to troubleshoot the problem.
+* **Onboarding partners and contractors** — We highly recommend that you add partners and contractors to your Identity Provider. IdPs like Okta and Azure Active Directory (AAD) offer capabilities explicitly for temporary employees. We highly recommend that you reach out to your IT team to provision an SSO license for these situations. Using an IdP is highly secure, reduces any breach risk, and significantly increases the security posture of your dbt Cloud environment.
+* **Identity Provider is down** — Account admins will continue to be able to log in with a password, which would allow them to work with your Identity Provider to troubleshoot the problem.
+* **Offboarding admins** — When offboarding admins, revoke access to dbt Cloud by deleting the user from your environment; otherwise, they can continue to use username/password credentials to log in.
 
 ### Next steps for non-admin users currently logging in with passwords
@@ -67,5 +68,5 @@ If you have any non-admin users logging into dbt Cloud with a password today:
 1. Ensure that all users have a user account in your identity provider and are assigned dbt Cloud so they won’t lose access.
 2. Alert all dbt Cloud users that they won’t be able to use a password for logging in anymore unless they are already an Admin with a password.
 3. We **DO NOT** recommend promoting any users to Admins just to preserve password-based logins because you will reduce security of your dbt Cloud environment.
-4. If an Admin leaves your company, manually delete their user to prevent non-SSO access via username and password.
-**
+
+

From 9372ab347348b403c1f1a0f9ffdecf6f596d5062 Mon Sep 17 00:00:00 2001
From: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Date: Wed, 6 Dec 2023 18:08:28 -0700
Subject: [PATCH 12/15] Fix menu display for the Jinja `return` function

---
 website/docs/reference/dbt-jinja-functions/return.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/reference/dbt-jinja-functions/return.md b/website/docs/reference/dbt-jinja-functions/return.md
index 43bbddfa2d1..d2069bc9254 100644
--- a/website/docs/reference/dbt-jinja-functions/return.md
+++ b/website/docs/reference/dbt-jinja-functions/return.md
@@ -1,6 +1,6 @@
 ---
 title: "About return function"
-sidebar_variable: "return"
+sidebar_label: "return"
 id: "return"
 description: "Read this guide to understand the return Jinja function in dbt."
 ---

From 4f19470fb6f3edc77e0f9f08a19754b9861d9a98 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Thu, 7 Dec 2023 10:43:19 -0500
Subject: [PATCH 13/15] add faq

---
 website/snippets/_sl-faqs.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/website/snippets/_sl-faqs.md b/website/snippets/_sl-faqs.md
index 5bc556ae00a..def8f3837f6 100644
--- a/website/snippets/_sl-faqs.md
+++ b/website/snippets/_sl-faqs.md
@@ -10,6 +10,11 @@
   As we refine MetricFlow’s API layers, some users may find it easier to set up their own custom service layers for managing query requests. This is not currently recommended, as the API boundaries around MetricFlow are not sufficiently well-defined for broad-based community use
 
+- **Why is my query limited to 100 rows in the dbt Cloud CLI?**
+- The default `limit` for queries issued from the dbt Cloud CLI is 100 rows. We set this default to prevent returning unnecessarily large data sets as the dbt Cloud CLI is typically used to query the dbt Semantic Layer during the development process, not for production reporting or to access large data sets. For most workflows, you only need to return a subset of the data.
+
+  However, you can change this limit if needed by setting the `--limit` option in your query. For example, to return 1000 rows, you can run `dbt sl list metrics --limit 1000`.
+
 - **Can I reference MetricFlow queries inside dbt models?**
 - dbt relies on Jinja macros to compile SQL, while MetricFlow is Python-based and does direct SQL rendering targeting at a specific dialect. MetricFlow does not support pass-through rendering of Jinja macros, so we can’t easily reference MetricFlow queries inside of dbt models.
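The FAQ added in patch 13 shows only one form of the `--limit` flag. A couple of hedged command sketches may make the behavior easier to picture; the metric and dimension names here are hypothetical:

```bash
# Raise the row cap for a single query (assumes a metric named order_total exists)
dbt sl query --metrics order_total --group-by metric_time --limit 1000

# The same flag applies to list commands, as the FAQ notes
dbt sl list metrics --limit 1000
```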
From cde1079ba598000006206f609ee3bd7c25c3b7d5 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Thu, 7 Dec 2023 10:43:58 -0500
Subject: [PATCH 14/15] add faq

---
 website/docs/docs/build/metricflow-commands.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/website/docs/docs/build/metricflow-commands.md b/website/docs/docs/build/metricflow-commands.md
index 7e535e4ea62..fb7146fc09e 100644
--- a/website/docs/docs/build/metricflow-commands.md
+++ b/website/docs/docs/build/metricflow-commands.md
@@ -556,3 +556,8 @@ Keep in mind that modifying your shell configuration files can have an impact on
 
 </details>
 
+<details>
+<summary>Why is my query limited to 100 rows in the dbt Cloud CLI?</summary>
+The default `limit` for queries issued from the dbt Cloud CLI is 100 rows. We set this default to prevent returning unnecessarily large data sets as the dbt Cloud CLI is typically used to query the dbt Semantic Layer during the development process, not for production reporting or to access large data sets. For most workflows, you only need to return a subset of the data.
+
+However, you can change this limit if needed by setting the `--limit` option in your query. For example, to return 1000 rows, you can run `dbt sl list metrics --limit 1000`.
+</details>
From 5ef2704ff20d48d7abba48f04901ca74a4faeeb8 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Thu, 7 Dec 2023 10:49:48 -0500
Subject: [PATCH 15/15] fix spacing

---
 website/docs/docs/build/metricflow-commands.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/build/metricflow-commands.md b/website/docs/docs/build/metricflow-commands.md
index fb7146fc09e..e3bb93da964 100644
--- a/website/docs/docs/build/metricflow-commands.md
+++ b/website/docs/docs/build/metricflow-commands.md
@@ -558,6 +558,6 @@ Keep in mind that modifying your shell configuration files can have an impact on
 <details>
 <summary>Why is my query limited to 100 rows in the dbt Cloud CLI?</summary>
-The default `limit` for queries issued from the dbt Cloud CLI is 100 rows. We set this default to prevent returning unnecessarily large data sets as the dbt Cloud CLI is typically used to query the dbt Semantic Layer during the development process, not for production reporting or to access large data sets. For most workflows, you only need to return a subset of the data.
 
+The default `limit` for queries issued from the dbt Cloud CLI is 100 rows. We set this default to prevent returning unnecessarily large data sets as the dbt Cloud CLI is typically used to query the dbt Semantic Layer during the development process, not for production reporting or to access large data sets. For most workflows, you only need to return a subset of the data.
 
 However, you can change this limit if needed by setting the `--limit` option in your query. For example, to return 1000 rows, you can run `dbt sl list metrics --limit 1000`.
 </details>