This is a sandbox project for exploring the basic functionality and latest features of dbt. It's based on a fictional restaurant called the Jaffle Shop that serves jaffles. Enjoy!
To get set up in the dbt Cloud IDE:

1. Follow the steps to create a new repository.
2. Set up a dbt Cloud account and follow Step 4 of the Quickstart instructions for your data platform to connect the platform to dbt Cloud.
3. Choose the repo you created in Step 1 as the repository for your dbt project code.
4. Click `Develop` in the top nav; you should be prompted to run a `dbt deps`, which you should do.
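If you're new to dbt, `dbt deps` installs the packages the project depends on (declared in its `packages.yml`); in the Cloud IDE you can run it from the command bar:

```shell
# Installs the dbt packages this project depends on (declared in packages.yml)
dbt deps
```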
To work with the dbt Cloud CLI instead:

1. Install the recommended extensions when prompted, unless you already have your own preferences set.
2. Run `task install-cloud`[^1] in the integrated terminal.
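For example, from the repository root in the integrated terminal (the version check is just a sanity check, assuming the install put the CLI on your `PATH`):

```shell
task install-cloud   # installs the dbt Cloud CLI plus supporting Python packages
dbt --version        # confirm the CLI is installed and on your PATH
```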
To work with dbt Core instead:

1. Install the recommended extensions when prompted, unless you already have your own preferences set.
2. Run `task install-core`[^2] in the integrated terminal.
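As a quick way to confirm the install (a sketch; `dbt debug` checks your profile, adapter, and database connection):

```shell
task install-core   # installs dbt Core, the DuckDB adapter, and supporting packages
dbt debug           # verify the profile and database connection
```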
If you know what you're doing, you can use this repo with any local or cloud database that has a dbt adapter. We can't offer support for this setup, but the general steps should be as follows (the full command sequence is sketched after the list):
1. Clone the new repository to your local machine or open it in a GitHub Codespace.
2. Run `task venv`.[^3]
3. Run `source .venv/bin/activate && exec $SHELL`.
4. Run `task install-core`.[^2]
5. Live your life!
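Putting those steps together, a minimal end-to-end sketch looks like this (the repository URL is a placeholder for your own fork or clone):

```shell
git clone <your-repo-url>                 # placeholder: substitute your repository's URL
cd jaffle-shop                            # assumes the default directory name
task venv                                 # creates a virtual environment called .venv
source .venv/bin/activate && exec $SHELL  # activate it in a fresh shell
task install-core                         # installs dbt Core and the DuckDB adapter
```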
Once your project is set up, use the following steps to get the project ready for whatever you'd like to do with it (a combined sketch follows the list).
1. Run `dbt seed` to load the sample data into your raw schema.
2. Delete the `jaffle-data` directory now that the raw data is loaded into the warehouse.
3. Run `task setup`.[^4]
4. Run a `dbt build` to build your project.
5. Ready to run whatever you want!
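Taken together, the sequence looks like this (a sketch; per the footnotes, `task setup` covers the first two steps in one command):

```shell
dbt seed             # load the sample CSVs into your raw schema
rm -rf jaffle-data   # delete the raw data directory once it's in the warehouse
# ...or run both of the above at once:
task setup
dbt build            # build the whole project
```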
[^1]: This will install the dbt Cloud CLI (currently in beta) as well as the Python packages necessary for running MetricFlow queries, linting your code, and other tasks.

[^2]: This will install dbt Core and the DuckDB adapter, as well as the Python packages necessary for running MetricFlow queries, linting your code, and other tasks.

[^3]: This will create a virtual environment called `.venv`.

[^4]: This will run a `dbt seed` then `rm -rf jaffle-data`, deleting the sample data now that it's loaded into your raw schema.