From 2234fec0a210138ae8f10bbd4c53e50e427f99a8 Mon Sep 17 00:00:00 2001
From: britt <47340362+traintestbritt@users.noreply.github.com>
Date: Fri, 13 Sep 2024 14:26:46 -0700
Subject: [PATCH] updates to the project docs naming and organization + wrote
 new project setup instructions for dbt cloud

---
 docs/dbt.md               |  39 ++++-
 docs/new-project-setup.md | 315 --------------------------------------
 docs/setup.md             | 233 ----------------------------
 mkdocs.yml                |   9 +-
 4 files changed, 36 insertions(+), 560 deletions(-)
 delete mode 100644 docs/new-project-setup.md
 delete mode 100644 docs/setup.md

diff --git a/docs/dbt.md b/docs/dbt.md
index 3dc87053..a2b3a01f 100644
--- a/docs/dbt.md
+++ b/docs/dbt.md
@@ -1,5 +1,36 @@
 # dbt on the Data Services and Engineering team
 
+## New project setup
+
+To set up a new project on dbt Cloud, follow these steps:
+
+1. Give your new project a name.
+1. Click *Advanced settings* and in the *Project subdirectory* field, enter "transform".
+1. Select a data warehouse connection (e.g. Snowflake, BigQuery, Redshift).
+1. In the *Development credentials* section:
+    1. Under *Auth method*, select *Key pair*.
+    1. Enter your data warehouse username.
+    1. Enter the private key and private key passphrase (see the key generation sketch below).
+    1. For more guidance, read [dbt's docs on connecting to Snowflake via key pair](https://docs.getdbt.com/docs/cloud/connect-data-platform/connect-snowflake#key-pair).
+1. Finally, click the *Test Connection* button.
+1. Connect the appropriate repository (usually GitHub). Read [dbt's docs on connecting to GitHub](https://docs.getdbt.com/docs/cloud/git/connect-github).
+
+Once you've completed these steps, you can return to the dbt homepage and click the *Settings* button in the upper right corner. From there you can follow the steps to configure three environments: continuous integration (CI), development, and production. Read [dbt's docs on CI in dbt Cloud](https://docs.getdbt.com/docs/deploy/continuous-integration), [dbt's docs on creating production (deployment) environments](https://docs.getdbt.com/docs/deploy/deploy-environments), and [dbt's docs on creating and scheduling deploy jobs](https://docs.getdbt.com/docs/deploy/deploy-jobs#create-and-schedule-jobs).
+
+You'll also want to [configure notifications for job failures](https://docs.getdbt.com/docs/deploy/job-notifications).
+
+Pictured below is an example of environment variables you can set for each environment. For more guidance, read [dbt's docs on environment variables](https://docs.getdbt.com/docs/build/environment-variables).
+
+![environment variables](images/environment_variables.png)
+
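+The key pair entered under *Development credentials* can be generated with
+OpenSSL. A minimal sketch, adapted from Snowflake's key-pair authentication
+docs (file names are illustrative, and you'll be prompted to choose a
+passphrase):
+
+```bash
+# Generate an encrypted private key, then derive its public key
+openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 aes256 -inform PEM -out rsa_key.p8
+openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
+```
+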
+## Architecture
+
+We broadly follow the architecture described in
+[this dbt blog post](https://www.getdbt.com/blog/how-we-configure-snowflake/)
+for our Snowflake dbt project.
+
+It is described in more detail in our [Snowflake docs](./snowflake.md#architecture).
+
 ## Naming conventions
 
 Models in a data warehouse do not follow the same naming conventions as [raw cloud resources](./naming-conventions.md#general-approach),
@@ -19,14 +50,6 @@ We may adopt additional conventions for denoting aggregations, column data types
 If during the course of a project's model development we determine that simpler human-readable names work better for our partners or downstream consumers, we may drop the above prefixing conventions.
 
-## Architecture
-
-We broadly follow the architecture described in
-[this dbt blog post](https://www.getdbt.com/blog/how-we-configure-snowflake/)
-for our Snowflake dbt project.
-
-It is described in more detail in our [Snowflake docs](./snowflake.md#architecture).
-
 ## Custom schema names
 
 dbt's default method for generating [custom schema names](https://docs.getdbt.com/docs/build/custom-schemas)
diff --git a/docs/new-project-setup.md b/docs/new-project-setup.md
deleted file mode 100644
index 71bc5449..00000000
--- a/docs/new-project-setup.md
+++ /dev/null
@@ -1,315 +0,0 @@
-# New Project Setup
-
-The DSE team regularly creates new Snowflake accounts in our Snowflake org.
-We do this instead of putting all of our data projects into our main account for a few reasons:
-
-1. At the end of a project, we often want to transfer account ownership to our partners.
-   Having it separated from the start helps that process.
-1. We frequently want to add our project champion or IT partners to our account as admins.
-   This is safer if project accounts are separate.
-1. We often want to have accounts in a specific cloud and region for compliance or data transfer reasons.
-1. Different projects may require different approaches to account-level operations like OAuth/SAML.
-
-Here we document the steps for creating a new Snowflake account from scratch.
-
-## Prerequisites
-
-### Obtain permissions in Snowflake
-
-In order to create a new account, you will need access to the `orgadmin` role.
-If you have `accountadmin` in the primary Snowflake account, you can grant it to yourself:
-
-```sql
-USE ROLE accountadmin;
-GRANT ROLE orgadmin TO USER <your_username>;
-```
-
-If you later want to revoke the `orgadmin` role from your user or any other, you can do so with:
-
-```sql
-USE ROLE accountadmin;
-REVOKE ROLE orgadmin FROM USER <your_username>;
-```
-
-### Get access to AWS
-
-We typically create our Snowflake architecture using Terraform.
-Terraform state is stored in S3 buckets within our AWS account,
-so you will need read/write access to those buckets.
-
-Ask a DSE AWS admin to give you access to these buckets,
-and [configure your AWS credentials](./setup.md#aws).
-
-### Install terraform dependencies
-
-You can install Terraform using whatever approach makes sense for your system,
-including using `brew` or `conda`.
-
-Here is a sample for installing the dependencies using `conda`:
-
-```bash
-conda create -n infra python=3.10  # create an environment named 'infra'
-conda activate infra  # activate the environment
-conda install -c conda-forge terraform tflint  # install terraform and tflint
-```
-
-## Snowflake account setup
-
-### Create the account
-
-1. Assume the `ORGADMIN` role.
-1. Under the "Admin" side panel, go to "Accounts" and click the "+ Account" button:
-    1. Select the cloud and region appropriate to the project. The region should be in the United States.
-    1. Select "Business Critical" for the Snowflake Edition.
-    1. You will be prompted to create an initial user with `ACCOUNTADMIN` privileges. This should be you.
-       You will be prompted to create a password for your user. Create one using your password manager,
-       but know that it will ask you to change your password upon first log-in.
-    1. Save the Account Locator and Account URL for your new account.
-1. Log into your new account. You should be prompted to change your password. Save the updated password in your password manager.
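At this point you can optionally confirm command-line access to the new account, for example with the SnowSQL CLI (if you have it installed; the account locator and username are the ones you just saved):

```bash
# Opens an interactive session; you'll be prompted for your password
snowsql -a <account_locator> -u <your_username>
```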
-### Enable multi-factor authentication for your user
-
-1. Ensure the Duo Mobile app is installed on your phone.
-1. In the upper-left corner of the Snowsight UI, click on your username, and select "Profile".
-1. At the bottom of the dialog, select "Enroll" to enable multi-factor authentication.
-1. Follow the instructions to link the new account with your Duo app.
-
-### Set up key pair authentication
-
-Certain Snowflake clients don't properly cache MFA tokens,
-which means that using them can generate dozens or hundreds of MFA requests on your phone.
-At best this makes the tools unusable, and at worst it can lock your Snowflake account.
-One example of such a tool is (as of this writing) the Snowflake Terraform Provider.
-
-The recommended workaround for this is to add a key pair to your account for use with those tools.
-
-1. Follow the instructions given [here](https://docs.snowflake.com/en/user-guide/key-pair-auth#configuring-key-pair-authentication)
-   to generate a key pair and add the public key to your account.
-   Keep the key pair in a secure place on your device.
-   [This gist](https://gist.github.com/ian-r-rose/1c714ee04be53f7a3fd80322e1a22c27)
-   contains the bash commands from the instructions,
-   and can be helpful for quickly creating a new encrypted key pair.
-   Usage of the script looks like:
-   ```bash
-   bash generate_encrypted_key.sh
-   ```
-   You can use `pbcopy < _your_public_key_file_name_.pub` to copy the contents of your public key.
-   Be sure to remove the `-----BEGIN PUBLIC KEY-----` and `-----END PUBLIC KEY-----` portions
-   when adding your key to your Snowflake user.
-1. In your local `.bash_profile` or an `.env` file, add environment variables for
-   `SNOWFLAKE_ACCOUNT`, `SNOWFLAKE_USER`, `SNOWFLAKE_PRIVATE_KEY_PATH`,
-   and (if applicable) `SNOWFLAKE_PRIVATE_KEY_PASSPHRASE`.
-
-### Apply a session policy
-
-By default, Snowflake logs out user sessions after four hours of inactivity.
-ODI's information security policies prefer that we log out after one hour of inactivity for most accounts,
-and after fifteen minutes of inactivity for particularly sensitive accounts.
-
-!!! note
-    It's possible we will do this using Terraform in the future,
-    but at the time of this writing the Snowflake Terraform provider does not support session policies.
-
-After the Snowflake account is created, run the following script in a worksheet
-to set the appropriate session policy:
-
-```sql
-use role sysadmin;
-create database if not exists policies;
-create session policy if not exists policies.public.account_session_policy
-  session_idle_timeout_mins = 60
-  session_ui_idle_timeout_mins = 60
-;
-use role accountadmin;
--- alter account unset session policy; -- unset any previously existing session policy
-alter account set session policy policies.public.account_session_policy;
-```
-
-### Add IT-Ops representatives
-
-TODO: establish and document processes here.
-
-### Set up Okta SSO and SCIM
-
-TODO: establish and document processes here.
-
-## Create project git repository
-
-Create a new git repository from the CalData Infrastructure Template
-following the instructions [here](https://github.com/cagov/caldata-infrastructure-template#usage).
-
-Once you have created the repository, push it to a remote repository in GitHub.
-There are some GitHub actions that will fail because the repository is not yet
-configured to work with the new Snowflake account.
-
-## Deploy project infrastructure using Terraform
-
-We will create two separate deployments of the project infrastructure,
-one for development, and one for production.
-In some places we will refer to project name and owner as `<project>` and `<owner>`, respectively,
-following our [naming conventions](./naming-conventions.md).
-You should substitute the appropriate names there.
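Before starting the steps in the next section, it can help to confirm that your shell has only the Snowflake variables you expect. A quick, illustrative check (not part of the repository):

```bash
# List every SNOWFLAKE_* variable currently set in this shell
env | grep '^SNOWFLAKE_'
```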
-### Create the dev configuration
-
-1. Ensure that your environment has environment variables set for
-   `SNOWFLAKE_ACCOUNT`, `SNOWFLAKE_USER`, `SNOWFLAKE_PRIVATE_KEY_PATH`, and `SNOWFLAKE_PRIVATE_KEY_PASSPHRASE`.
-   Make sure you *don't* have any other `SNOWFLAKE_*` variables set,
-   as they can interfere with authentication.
-1. In the new git repository, create a directory to hold the development Terraform configuration:
-   ```bash
-   mkdir -p terraform/environments/dev/
-   ```
-   The location of this directory is by convention, and subject to change.
-1. Copy the terraform configuration from
-   [here](https://github.com/cagov/data-infrastructure/blob/main/terraform/snowflake/environments/dev/main.tf)
-   to your `dev` directory.
-1. In the "elt" module of `main.tf`, change the `source` parameter to point to
-   `"github.com/cagov/data-infrastructure.git//terraform/snowflake/modules/elt?ref=<hash>"`
-   where `<hash>` is the short hash of the most recent commit in the `data-infrastructure` repository.
-1. In the `dev` directory, create a new backend configuration file called `<project>-<owner>-dev.tfbackend`.
-   The file will point to the S3 bucket in which we are storing terraform state. Populate the backend
-   configuration file with the following (making sure to substitute values for `<project>` and `<owner>`):
-   ```hcl
-   bucket = "dse-snowflake-dev-terraform-state"
-   dynamodb_table = "dse-snowflake-dev-terraform-state-lock"
-   key = "<project>-<owner>-dev.tfstate"
-   region = "us-west-2"
-   ```
-1. In the `dev` directory, create a terraform variables file called `terraform.tfvars`,
-   and populate the "elt" module variables. These variables may expand in the future,
-   but at the moment they are just the new Snowflake account locator and the environment
-   (in this case `"DEV"`):
-   ```hcl
-   locator = "<account_locator>"
-   environment = "DEV"
-   ```
-1. Initialize the configuration:
-   ```bash
-   terraform init -backend-config <project>-<owner>-dev.tfbackend
-   ```
-1. Include both Mac and Linux provider binaries in your terraform lock file.
-   This helps mitigate differences between CI environments and ODI Macs:
-   ```bash
-   terraform providers lock -platform=linux_amd64 -platform=darwin_amd64
-   ```
-1. Add your new `main.tf`, `terraform.tfvars`, `<project>-<owner>-dev.tfbackend`,
-   and terraform lock file to the git repository. Do not add the `.terraform/` directory.
-
-### Deploy the dev configuration
-
-1. Ensure that your local environment has environment variables set for `SNOWFLAKE_ACCOUNT`,
-   `SNOWFLAKE_USER`, `SNOWFLAKE_PRIVATE_KEY_PATH`, and `SNOWFLAKE_PRIVATE_KEY_PASSPHRASE`,
-   and that they are set to your new account, rather than any other accounts.
-1. Run `terraform plan` to see the plan for the resources that will be created.
-   Inspect the plan to see that everything looks correct.
-1. Run `terraform apply` to deploy the configuration. This will actually create the infrastructure!
-
-### Configure and deploy the production configuration
-
-Re-run all of the steps above, but in a new directory `terraform/environments/prd`.
-Everywhere where there is a `dev` (or `DEV`), replace it with a `prd` (or `PRD`).
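Condensed into commands, the production pass might look like the sketch below (directory and file names assumed by analogy with the dev steps above):

```bash
# Create the prd configuration alongside the dev one
mkdir -p terraform/environments/prd/
cd terraform/environments/prd/
# ...add main.tf, terraform.tfvars, and the backend file, swapping dev for prd...
terraform init -backend-config <project>-<owner>-prd.tfbackend
terraform plan    # review the resources before creating them
terraform apply   # create the production infrastructure
```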
-## Set up Sentinel logging
-
-ODI IT requires that systems log to our Microsoft Sentinel instance
-for compliance with security monitoring policies.
-The terraform configuration deployed above creates a service account for Sentinel
-which needs to be integrated.
-
-1. Create a password for the Sentinel service account.
-   In other contexts we prefer key pairs for service accounts, but the Sentinel
-   integration requires password authentication. In a Snowflake worksheet run:
-   ```sql
-   use role securityadmin;
-   alter user sentinel_svc_user_prd set password = '<new_password>';
-   ```
-1. Store the Sentinel service account authentication information in our shared
-   1Password vault.
-   Make sure to provide enough information to disambiguate it from others stored in the vault,
-   including:
-
-    * The account locator
-    * The account name (distinct from the account locator)
-    * The service account name
-    * The password
-
-1. Create an IT Help Desk ticket to add the new account to our Sentinel instance.
-   Share the 1Password item with the IT-Ops staff member who is implementing the ticket.
-   If you've included all of the above information in the vault item,
-   it should be all they need.
-1. Within fifteen minutes or so of implementation it should be clear whether the integration is working.
-   IT-Ops should be able to see logs ingesting, and Snowflake account admins should see queries
-   from the Sentinel service user.
-
-## Set up CI in GitHub
-
-The projects generated from our infrastructure template need read access to the
-Snowflake account in order to do two things from GitHub actions:
-
-1. Verify that dbt models in branches compile and pass linter checks
-1. Generate dbt docs upon merge to `main`.
-
-The terraform configurations deployed above create two service accounts
-for GitHub actions, a production one for docs and a dev one for CI checks.
-
-### Add key pairs to the GitHub service accounts
-
-Set up key pairs for the two GitHub actions service accounts
-(`GITHUB_ACTIONS_SVC_USER_DEV` and `GITHUB_ACTIONS_SVC_USER_PRD`).
-This follows a similar procedure to what you did for your personal key pair,
-though the project template currently does not assume an encrypted key pair.
-[This bash script](https://gist.github.com/ian-r-rose/35d49bd253194f57b57e9e59a595bed8)
-is a helpful shortcut for generating the key pair:
-
-```bash
-bash generate_key.sh
-```
-
-Once you have created and set the key pairs, add them to the DSE 1Password shared vault.
-Make sure to provide enough information to disambiguate the key pair from others stored in the vault,
-including:
-
-* The account locator
-* The account name (distinct from the account locator)
-* The service account name
-* The public key
-* The private key
-
-### Set up GitHub actions secrets
-
-You need to configure secrets in GitHub actions
-in order for the service accounts to be able to connect to your Snowflake account.
-From the repository page, go to "Settings", then to "Secrets and variables", then to "Actions".
-
-Add the following repository secrets:
-
-| Variable | Value |
-|----------|-------|
-| `SNOWFLAKE_ACCOUNT` | new account locator |
-| `SNOWFLAKE_USER_DEV` | `GITHUB_ACTIONS_SVC_USER_DEV` |
-| `SNOWFLAKE_USER_PRD` | `GITHUB_ACTIONS_SVC_USER_PRD` |
-| `SNOWFLAKE_PRIVATE_KEY_DEV` | dev service account private key |
-| `SNOWFLAKE_PRIVATE_KEY_PRD` | prd service account private key |
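These secrets can also be set from the command line with the GitHub CLI — a sketch, assuming `gh` is installed and authenticated in the repository, with illustrative key file names:

```bash
# Add the Snowflake connection secrets to the repository
gh secret set SNOWFLAKE_ACCOUNT --body "<account_locator>"
gh secret set SNOWFLAKE_USER_DEV --body "GITHUB_ACTIONS_SVC_USER_DEV"
gh secret set SNOWFLAKE_USER_PRD --body "GITHUB_ACTIONS_SVC_USER_PRD"
gh secret set SNOWFLAKE_PRIVATE_KEY_DEV < github_actions_dev_key.p8
gh secret set SNOWFLAKE_PRIVATE_KEY_PRD < github_actions_prd_key.p8
```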
-### Enable GitHub pages for the repository
-
-The repository must have GitHub pages enabled in order for it to deploy and be viewable.
-
-1. From the repository page, go to "Settings", then to "Pages".
-1. Under "GitHub Pages visibility" select "Private" (unless the project is public!).
-1. Under "Build and deployment" select "Deploy from a branch" and choose "gh-pages" as your branch.
-
-## Tearing down a project
-
-Upon completion of a project (or if you just went through the above for testing purposes)
-there are a few steps needed to tear down the infrastructure.
-
-1. If the GitHub repository is to be handed off to a client, transfer ownership of it to them.
-   Otherwise, delete or archive the GitHub repository.
-   If archiving, delete the GitHub actions secrets.
-1. Open a Help Desk ticket with IT-Ops to remove Sentinel logging for the Snowflake account.
-1. If the Snowflake account is to be handed off to a client, transfer ownership of it to them.
-   Otherwise, [drop the account](https://docs.snowflake.com/en/user-guide/organizations-manage-accounts-delete).
diff --git a/docs/setup.md b/docs/setup.md
deleted file mode 100644
index 61377df1..00000000
--- a/docs/setup.md
+++ /dev/null
@@ -1,233 +0,0 @@
-# Repository setup
-
-These are instructions for individual contributors to set up the repository locally.
-For instructions on how to develop using GitHub Codespaces, see [here](./codespaces.md).
-
-## Install dependencies
-
-### 1. Set up a Python virtual environment
-
-Much of the software in this project is written in Python.
-It is usually worthwhile to install Python packages into a virtual environment,
-which allows them to be isolated from those in other projects which might have different version constraints.
-
-One popular solution for managing Python environments is [Anaconda/Miniconda](https://docs.conda.io/en/latest/miniconda.html).
-Another option is to use [`pyenv`](https://github.com/pyenv/pyenv).
-Pyenv is lighter weight, but is Python-only, whereas conda allows you to install packages from other language ecosystems.
-
-Here are instructions for setting up a Python environment using Miniconda:
-
-1. Follow the installation instructions for installing [Miniconda](https://docs.conda.io/en/latest/miniconda.html#system-requirements).
-2. Create a new environment called `infra`:
-   ```bash
-   conda create -n infra -c conda-forge python=3.10 poetry
-   ```
-   The following prompt will appear: "_The following NEW packages will be INSTALLED:_"
-   You'll have the option to accept or reject by typing _y_ or _n_. Type _y_ to continue.
-3. Activate the `infra` environment:
-   ```bash
-   conda activate infra
-   ```
-
-### 2. Install Python dependencies
-
-Python dependencies are specified using [`poetry`](https://python-poetry.org/).
-
-To install them, open a terminal and ensure you are working in the data-infrastructure root folder, then enter the following:
-
-```bash
-poetry install --with dev --no-root
-```
-
-Any time the dependencies change, you can re-run the above command to update them.
-
-### 3. Install go dependencies
-
-We use [Terraform](https://www.terraform.io/) to manage infrastructure.
-Dependencies for Terraform (mostly in the [go ecosystem](https://go.dev/))
-can be installed via a number of different package managers.
-
-If you are running Mac OS, you can install these dependencies with [Homebrew](https://brew.sh/).
-First, install Homebrew:
-
-```bash
-/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
-```
-
-Then install the go dependencies:
-
-```bash
-brew install terraform terraform-docs tflint go
-```
-
-If you are a conda user on any architecture, you should be able to install these dependencies with:
-
-```bash
-conda install -c conda-forge terraform go-terraform-docs tflint
-```
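Whichever package manager you use, a quick sanity check that the tools landed on your `PATH` (each command should print a version string):

```bash
terraform version
terraform-docs --version
tflint --version
```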
-## Configure Snowflake
-
-In order to use Snowflake (as well as the terraform validators for the Snowflake configuration)
-you should set some default environment variables in your local environment.
-How you do this will depend on your operating system and shell. For Linux and Mac OS systems,
-as well as users of Windows Subsystem for Linux (WSL), these are often set in
-`~/.zshrc`, `~/.bashrc`, or `~/.bash_profile`.
-
-If you use zsh or bash, open your shell configuration file, and add the following lines:
-
-**Default Transformer role**
-
-```bash
-export SNOWFLAKE_ACCOUNT=<account_locator>
-export SNOWFLAKE_DATABASE=TRANSFORM_DEV
-export SNOWFLAKE_USER=<your_username>
-export SNOWFLAKE_PASSWORD=<your_password>
-export SNOWFLAKE_ROLE=TRANSFORMER_DEV
-export SNOWFLAKE_WAREHOUSE=TRANSFORMING_XS_DEV
-```
-
-This will enable you to perform transforming activities, which are needed for dbt.
-Open a new terminal and verify that the environment variables are set.
-
-**Switch to Loader role**
-
-```bash
-export SNOWFLAKE_ACCOUNT=<account_locator>
-export SNOWFLAKE_DATABASE=RAW_DEV
-export SNOWFLAKE_USER=<your_username>
-export SNOWFLAKE_PASSWORD=<your_password>
-export SNOWFLAKE_ROLE=LOADER_DEV
-export SNOWFLAKE_WAREHOUSE=LOADING_XS_DEV
-```
-
-This will enable you to perform loading activities, which are needed for Airflow or Fivetran.
-Again, open a new terminal and verify that the environment variables are set.
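For example, one way to spot-check the current values from a fresh shell (purely illustrative):

```bash
# Print the connection settings the Snowflake clients will pick up
echo "$SNOWFLAKE_ACCOUNT $SNOWFLAKE_ROLE $SNOWFLAKE_WAREHOUSE"
```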
-## Configure AWS and GCP (optional)
-
-### AWS
-
-In order to create and manage AWS resources programmatically,
-you need to create access keys and configure your local setup to use them:
-
-1. [Install](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) the AWS command-line interface.
-1. Go to the AWS IAM console and [create an access key for yourself](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey).
-1. In a terminal, enter `aws configure`, and add the access key ID and secret access key when prompted. We use `us-west-2` as our default region.
-
-## Configure dbt
-
-The connection information for our data warehouses will,
-in general, live outside of this repository.
-This is because connection information is both user-specific and usually sensitive,
-so it should not be checked into version control.
-In order to run this project locally, you will need to provide this information
-in a YAML file located (by default) in `~/.dbt/profiles.yml`.
-
-Instructions for writing a `profiles.yml` are documented
-[here](https://docs.getdbt.com/docs/get-started/connection-profiles),
-as well as specific instructions for
-[Snowflake](https://docs.getdbt.com/reference/warehouse-setups/snowflake-setup).
-
-You can verify that your `profiles.yml` is configured properly by running
-
-```bash
-dbt debug
-```
-
-from the project root directory (`transform`).
-
-### Snowflake project
-
-A minimal version of a `profiles.yml` for dbt development with Snowflake is:
-
-**ODI users**
-```yml
-dse_snowflake:
-  target: dev
-  outputs:
-    dev:
-      type: snowflake
-      account: <account_locator>
-      user: <your_username>
-      authenticator: externalbrowser
-      role: TRANSFORMER_DEV
-      database: TRANSFORM_DEV
-      warehouse: TRANSFORMING_XS_DEV
-      schema: DBT_<YOUR_NAME>   # Test schema for development
-      threads: 4
-```
-
-**External users**
-```yml
-dse_snowflake:
-  target: dev
-  outputs:
-    dev:
-      type: snowflake
-      account: <account_locator>
-      user: <your_username>
-      password: <your_password>
-      authenticator: username_password_mfa
-      role: TRANSFORMER_DEV
-      database: TRANSFORM_DEV
-      warehouse: TRANSFORMING_XS_DEV
-      schema: DBT_<YOUR_NAME>   # Test schema for development
-      threads: 4
-```
-
-!!! note
-    The target name (`dev`) in the above example can be anything.
-    However, we treat targets named `prd` differently in generating
-    custom dbt schema names (see [here](./dbt.md#custom-schema-names)).
-    We recommend naming your local development target `dev`, and only
-    including a `prd` target in your profiles under rare circumstances.
-
-### Combined `profiles.yml`
-
-You can include profiles for several databases in the same `profiles.yml`
-(as well as targets for production), allowing you to develop in several projects
-using the same computer.
-
-### Example VS Code setup
-
-This project can be developed entirely using dbt Cloud.
-That said, many people prefer to use more featureful editors,
-and the code quality checks that are set up here are easier to run locally.
-By equipping a text editor like VS Code with an appropriate set of extensions and configurations
-we can largely replicate the dbt Cloud experience locally.
-Here is one possible configuration for VS Code:
-
-1. Install some useful extensions (this list is advisory, and non-exhaustive):
-    * dbt Power User (query previews, compilation, and auto-completion)
-    * Python (Microsoft's bundle of Python linters and formatters)
-    * sqlfluff (SQL linter)
-1. Configure the VS Code Python extension to use your virtual environment by choosing `Python: Select Interpreter` from the command palette and selecting your virtual environment from the options.
-1. Associate `.sql` files with the `jinja-sql` language by going to `Code` -> `Preferences` -> `Settings` -> `Files: Associations`, per [these](https://github.com/innoverio/vscode-dbt-power-user#associate-your-sql-files-the-jinja-sql-language) instructions.
-1. Test that the `vscode-dbt-power-user` extension is working by opening one of the project model `.sql` files and pressing the "▶" icon in the upper right corner. A query results pane should open that shows a preview of the data.
-
-## Installing `pre-commit` hooks
-
-This project uses [pre-commit](https://pre-commit.com/) to lint, format,
-and generally enforce code quality. These checks are run on every commit,
-as well as in CI.
-
-To set up your pre-commit environment locally, run the following in the data-infrastructure repo root folder:
-
-```bash
-pre-commit install
-```
-
-The next time you make a commit, the pre-commit hooks will run on the contents of your commit
-(the first time may be a bit slow as there is some additional setup).
-
-You can verify that the pre-commit hooks are working properly by running
-
-```bash
-pre-commit run --all-files
-```
-
-to test every file in the repository against the checks.
-
-Some of the checks lint our dbt models and Terraform configurations,
-so having the terraform dependencies installed and the dbt project configured
-is a requirement to run them, even if you don't intend to use those packages.
diff --git a/mkdocs.yml b/mkdocs.yml
index e160fa25..124805f8 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -26,7 +26,7 @@ markdown_extensions:
 
 nav:
   - Introduction: index.md
-  - Project Setup: setup.md
+  - Local Environment Setup: local-setup.md
   - Codespaces: codespaces.md
   - Code Review: code-review.md
   - Writing Documentation: writing-documentation.md
@@ -34,10 +34,11 @@ nav:
   - Security Guidelines: security.md
   - Cloud Infrastructure: cloud-infrastructure.md
   - Project Architecture: architecture.md
-  - Snowflake: snowflake.md
-  - New Project Setup: new-project-setup.md
+  - Snowflake:
+      - Snowflake Overview: snowflake.md
+      - New Project Setup: snowflake-project-setup.md
   - dbt:
-      - Overview: dbt.md
+      - dbt Overview: dbt.md
   - dbt Performance: dbt-performance.md
   - dbt Snowflake Project: dbt_docs_snowflake/index.html
   - Data: