# upstream-prod

`upstream-prod` is a dbt package for easily using production data in a development environment. It's a hands-off alternative to the `defer` flag - without the need to find and download a production manifest - and was inspired by similar work by Monzo.
In a typical project, prod and dev models are materialised in separate environments. Although this ensures end users are unaffected by ongoing development, there's a significant downside: the isolation means that each environment needs a complete, up-to-date copy of every model. This can be challenging for complex projects or long-running models, and out-of-date data can cause frustrating errors.
`upstream-prod` solves this by intelligently redirecting `ref`s to prod outputs. It is highly adaptable and can be used whether your environments are in separate schemas, databases, or a combination of both. On most warehouses it can even compare dev and prod outputs and use the most recently-updated relation.
> ⚠️ Setup instructions changed in version 0.8.0 - you'll need to update your `ref` macro if upgrading from an earlier version.
> ℹ️ If you need help setting up the package, please create an issue or tag / DM @LewisDavies on dbt's Slack.
The package relies on a few variables that indicate where prod data is available. The exact requirements depend on your setup; use the questions below to find the correct variables for your project.
If you aren't sure, check your `macros` directory for a macro called `generate_schema_name`. The exact filename may differ - dbt's docs call it `get_custom_schema.sql` - so you may need to check the file contents.
Your platform may use a different term for *database*, such as *catalog* on Databricks or *project* on BigQuery.

|  | Custom schema macro | No custom schema macro |
|---|---|---|
| Dev & prod in same database | Setup A | Setup B |
| Dev & prod in different databases | Setup C | Setup D |
## Setup A
The custom macro requires two small tweaks to work with the package. This is easiest to explain with an example, so here is how to modify the built-in `generate_schema_name_for_env` macro:
```sql
-- 1. Add an is_upstream_prod parameter that defaults to False
{% macro generate_schema_name(custom_schema_name, node, is_upstream_prod=False) -%}
    {%- set default_schema = target.schema -%}
    -- 2. In the clause that generates your prod schema names, add a check that the value is True
    --    **Make sure to enclose the or condition in brackets**
    {%- if (target.name == "prod" or is_upstream_prod == true) and custom_schema_name is not none -%}
        {{ custom_schema_name | trim }}
    {%- else -%}
        {{ default_schema }}
    {%- endif -%}
{%- endmacro %}
```
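As a rough check that the tweak behaves as intended, the macro should now return the prod schema whenever the flag is set, regardless of target. A sketch, assuming a hypothetical dev target where `target.schema` is `dbt_lewis` and the model's custom schema is `marts`:

```sql
{{ generate_schema_name("marts", node) }}                         -- returns: dbt_lewis
{{ generate_schema_name("marts", node, is_upstream_prod=True) }}  -- returns: marts
```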
Add the values below to the `vars` section of `dbt_project.yml`. Some optional variables are included to improve your experience:

- `upstream_prod_fallback` tells the package to return your dev relation if the prod version can't be found. This is very useful when creating multiple models at the same time.
- `upstream_prod_prefer_recent` compares when the prod and dev relations were last modified and returns the most recent. This is only available on Snowflake, Databricks & BigQuery.
- `upstream_prod_disabled_targets` is used to bypass the package in certain environments. It is highly recommended to disable the package for prod runs.
```yml
# dbt_project.yml
vars:
  # Required
  upstream_prod_env_schemas: true

  # Optional, but recommended
  upstream_prod_fallback: true
  upstream_prod_prefer_recent: true
  upstream_prod_disabled_targets:
    - prod
```
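With these variables in place, a dev ref should resolve along these lines (a sketch, with hypothetical database and schema names):

```sql
-- dev compile of: select * from {{ ref('stg_events') }}
-- without upstream-prod: my_db.dbt_lewis.stg_events
-- with upstream-prod:    my_db.staging.stg_events  -- prod schema from your custom macro
```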
## Setup B
Add the values below to the `vars` section of `dbt_project.yml`. Some optional variables are included to improve your experience:

- `upstream_prod_fallback` tells the package to return your dev relation if the prod version can't be found. This is very useful when creating multiple models at the same time.
- `upstream_prod_prefer_recent` compares when the prod and dev relations were last modified and returns the most recent. This is only available on Snowflake, Databricks & BigQuery.
- `upstream_prod_disabled_targets` is used to bypass the package in certain environments. It is highly recommended to disable the package for prod runs.
```yml
# dbt_project.yml
vars:
  # Required
  upstream_prod_schema: <prod_schema_name/prefix>

  # Optional, but recommended
  upstream_prod_fallback: true
  upstream_prod_prefer_recent: true
  upstream_prod_disabled_targets:
    - prod
```
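Here the package swaps your dev schema name (or prefix) for the prod one. A sketch, assuming `upstream_prod_schema: analytics` and a hypothetical dev schema of `dbt_lewis`:

```sql
-- dev compile of: select * from {{ ref('stg_events') }}
-- without upstream-prod: my_db.dbt_lewis.stg_events
-- with upstream-prod:    my_db.analytics.stg_events
```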
## Setup C
The custom macro requires two small tweaks to work with the package. This is easiest to explain with an example, so here is how to modify the built-in `generate_schema_name_for_env` macro:
```sql
-- 1. Add an is_upstream_prod parameter that defaults to False
{% macro generate_schema_name(custom_schema_name, node, is_upstream_prod=False) -%}
    {%- set default_schema = target.schema -%}
    -- 2. In the clause that generates your prod schema names, add a check that the value is True
    --    **Make sure to enclose the or condition in brackets**
    {%- if (target.name == "prod" or is_upstream_prod == true) and custom_schema_name is not none -%}
        {{ custom_schema_name | trim }}
    {%- else -%}
        {{ default_schema }}
    {%- endif -%}
{%- endmacro %}
```
Add the values below to the `vars` section of `dbt_project.yml`. Some optional variables are included to improve your experience:

- `upstream_prod_fallback` tells the package to return your dev relation if the prod version can't be found. This is very useful when creating multiple models at the same time.
- `upstream_prod_prefer_recent` compares when the prod and dev relations were last modified and returns the most recent. This is only available on Snowflake, Databricks & BigQuery.
- `upstream_prod_disabled_targets` is used to bypass the package in certain environments. It is highly recommended to disable the package for prod runs.
```yml
# dbt_project.yml
vars:
  # Required
  upstream_prod_database: <prod_database_name>
  upstream_prod_env_schemas: true

  # Optional, but recommended
  upstream_prod_fallback: true
  upstream_prod_prefer_recent: true
  upstream_prod_disabled_targets:
    - prod
```
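With both variables set, the package changes database and schema together. A sketch, with hypothetical relation names:

```sql
-- dev compile of: select * from {{ ref('stg_events') }}
-- without upstream-prod: dev_db.dbt_lewis.stg_events
-- with upstream-prod:    prod_db.staging.stg_events  -- database from the var, schema from your macro
```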
### Advanced: projects with multiple prod & dev databases
If your project materialises models in more than one database per environment, use `upstream_prod_database_replace` instead of `upstream_prod_database`. You can then provide a two-item list with the value to find and its replacement string.
For example, a project that materialises `models/marts` in one database and everything else in another would use 4 databases:

- During development:
  - `models/marts` → `dev_marts_db`
  - Everything else → `dev_stg_db`
- In production:
  - `models/marts` → `prod_marts_db`
  - Everything else → `prod_stg_db`
Setting `upstream_prod_database_replace: [dev, prod]` would allow the package to work with this project.
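Conceptually, this is a find-and-replace on each upstream relation's database name. A minimal sketch of the idea, not the package's actual internals (`parent_database` is a hypothetical stand-in for the upstream model's database):

```sql
{% set db_replace = var("upstream_prod_database_replace") %}
{# dev_marts_db -> prod_marts_db, dev_stg_db -> prod_stg_db #}
{% set prod_database = parent_database | replace(db_replace[0], db_replace[1]) %}
```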
## Setup D
Add the values below to the `vars` section of `dbt_project.yml`. Some optional variables are included to improve your experience:

- `upstream_prod_fallback` tells the package to return your dev relation if the prod version can't be found. This is very useful when creating multiple models at the same time.
- `upstream_prod_prefer_recent` compares when the prod and dev relations were last modified and returns the most recent. This is only available on Snowflake, Databricks & BigQuery.
- `upstream_prod_disabled_targets` is used to bypass the package in certain environments. It is highly recommended to disable the package for prod runs.
```yml
# dbt_project.yml
vars:
  # Required
  upstream_prod_database: <prod_database_name>
  upstream_prod_schema: <prod_schema_name/prefix>

  # Optional, but recommended
  upstream_prod_fallback: true
  upstream_prod_prefer_recent: true
  upstream_prod_disabled_targets:
    - prod
```
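Combined, the two variables should produce resolutions like this (a sketch, with hypothetical names):

```sql
-- dev compile of: select * from {{ ref('stg_events') }}
-- without upstream-prod: dev_db.dbt_lewis.stg_events
-- with upstream-prod:    prod_db.analytics.stg_events
```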
### Advanced: projects with multiple prod & dev databases
If your project materialises models in more than one database per environment, use `upstream_prod_database_replace` instead of `upstream_prod_database`. You can then provide a two-item list with the value to find and its replacement string.
For example, a project that materialises `models/marts` in one database and everything else in another would use 4 databases:

- During development:
  - `models/marts` → `dev_marts_db`
  - Everything else → `dev_stg_db`
- In production:
  - `models/marts` → `prod_marts_db`
  - Everything else → `prod_stg_db`
Setting `upstream_prod_database_replace: [dev, prod]` would allow the package to work with this project.
In your `macros` directory, create a file called `ref.sql` with the following contents:
```sql
{% macro ref(
    parent_model,
    prod_database=var("upstream_prod_database", None),
    prod_schema=var("upstream_prod_schema", None),
    enabled=var("upstream_prod_enabled", True),
    fallback=var("upstream_prod_fallback", False),
    env_schemas=var("upstream_prod_env_schemas", False),
    version=None,
    prefer_recent=var("upstream_prod_prefer_recent", False),
    prod_database_replace=var("upstream_prod_database_replace", None)
) %}
    {% do return(upstream_prod.ref(
        parent_model,
        prod_database,
        prod_schema,
        enabled,
        fallback,
        env_schemas,
        version,
        prefer_recent,
        prod_database_replace
    )) %}
{% endmacro %}
```
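Because this shim shadows dbt's built-in `ref`, existing model code needs no changes - every ref in the project is redirected automatically. For example, a model like this (names hypothetical) works as-is:

```sql
-- models/int_events.sql, unchanged
select
    event_id,
    event_type
from {{ ref('stg_events') }}  -- now routed through upstream_prod.ref
```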
Alternatively, you can find any instances of `{{ ref() }}` in your project and replace them with `{{ upstream_prod.ref() }}`. This is suitable for testing the package but is not recommended for general use.
Assume your project has an `events` model that depends on intermediate and staging layers. The simplified DAG looks like this:
```mermaid
graph LR
    source[(Source)]
    source -.-> prod_stg[stg_events]
    source ==> dev_stg[stg_events]
    subgraph prod
        prod_stg -.-> prod_int[int_events] -.-> prod_mart[events]
    end
    subgraph dev
        dev_stg ==> dev_int[int_events] ==> dev_mart[events]
    end
```
You want to change `int_events`, so you need a copy of `stg_events` in dev. This could be expensive and time-consuming to create from scratch, and it could slow down your development process considerably. Perhaps this model already exists from previous work, but is it up to date? If the model definition or underlying data has changed, your dev model may break in prod.
`upstream-prod` solves this problem by intelligently redirecting `ref`s based on the models selected for the current run. Running `dbt build -s int_events+` would:

- Create `dev.int_events` using data from `prod.stg_events`
- Create `dev.events` on top of `dev.int_events`, since the package recognises that `int_events` has been selected
- Run tests against `dev.int_events` and `dev.events`
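In compiled-SQL terms, the run above should roughly produce the following (database and schema names are simplified):

```sql
-- int_events: stg_events was NOT selected, so it reads from prod
select * from prod.stg_events

-- events: int_events WAS selected this run, so it reads from dev
select * from dev.int_events
```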
Now that your dev models are using prod data, your DAG would look like this:
```mermaid
graph LR
    source[(Source)]
    source ==> prod_stg[stg_events]
    source -.-> dev_stg[stg_events]
    subgraph prod
        prod_stg -.-> prod_int[int_events] -.-> prod_mart[events]
    end
    subgraph dev
        dev_stg ~~~ dev_int
        prod_stg ==> dev_int[int_events] ==> dev_mart[events]
    end
```
`upstream-prod` is built and tested on Snowflake. Based on my experience and user reports, it is known to work on:
- Snowflake
- Databricks
- BigQuery
- Redshift (you may need RA3 nodes for cross-database queries)
- Azure Synapse
It should also work with community-supported adapters that specify a target database or schema in `profiles.yml`.