diff --git a/website/docs/guides/teradata-qs.md b/website/docs/guides/teradata-qs.md
index da951620515..63e217ebc2c 100644
--- a/website/docs/guides/teradata-qs.md
+++ b/website/docs/guides/teradata-qs.md
@@ -59,50 +59,48 @@ If you created your Teradata Vantage database instance at https://clearscape.ter
:::
-1. Use your preferred SQL IDE editor to create two databases: `jaffle_shop` and `stripe`:
+1. Use your preferred SQL IDE to create the database `jaffle_shop`:
```sql
CREATE DATABASE jaffle_shop AS PERM = 1e9;
- CREATE DATABASE stripe AS PERM = 1e9;
```
-2. In the databases `jaffle_shop` and `stripe`, create three foreign tables and reference the respective csv files located in object storage:
-
- ```sql
- CREATE FOREIGN TABLE jaffle_shop.customers (
- id integer,
- first_name varchar (100),
- last_name varchar (100)
- )
- USING (
- LOCATION ('/s3/dbt-tutorial-public.s3.amazonaws.com/jaffle_shop_customers.csv')
- )
- NO PRIMARY INDEX;
-
- CREATE FOREIGN TABLE jaffle_shop.orders (
- id integer,
- user_id integer,
- order_date date,
- status varchar(100)
- )
- USING (
- LOCATION ('/s3/dbt-tutorial-public.s3.amazonaws.com/jaffle_shop_orders.csv')
- )
- NO PRIMARY INDEX;
-
- CREATE FOREIGN TABLE stripe.payment (
- id integer,
- orderid integer,
- paymentmethod varchar (100),
- status varchar (100),
- amount integer,
- created date
- )
- USING (
- LOCATION ('/s3/dbt-tutorial-public.s3.amazonaws.com/stripe_payments.csv')
- )
- NO PRIMARY INDEX;
- ```
+2. In the `jaffle_shop` database, create three foreign tables that reference the respective CSV files located in object storage:
+
+ ```sql
+ CREATE FOREIGN TABLE jaffle_shop.customers (
+ id integer,
+ first_name varchar (100),
+ last_name varchar (100),
+ email varchar (100)
+ )
+ USING (
+ LOCATION ('/gs/storage.googleapis.com/clearscape_analytics_demo_data/dbt/rawcustomers.csv')
+ )
+ NO PRIMARY INDEX;
+
+ CREATE FOREIGN TABLE jaffle_shop.orders (
+ id integer,
+ user_id integer,
+ order_date date,
+ status varchar(100)
+ )
+ USING (
+ LOCATION ('/gs/storage.googleapis.com/clearscape_analytics_demo_data/dbt/raw_orders.csv')
+ )
+ NO PRIMARY INDEX;
+
+ CREATE FOREIGN TABLE jaffle_shop.payments (
+ id integer,
+ orderid integer,
+ paymentmethod varchar (100),
+ amount integer
+ )
+ USING (
+ LOCATION ('/gs/storage.googleapis.com/clearscape_analytics_demo_data/dbt/raw_payments.csv')
+ )
+ NO PRIMARY INDEX;
+ ```
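+
+   Optionally, confirm that the foreign tables resolve against the CSV files by sampling a few rows (`TOP 3` here is just an arbitrary small sample size):
+
+   ```sql
+   SELECT TOP 3 * FROM jaffle_shop.customers;
+   SELECT TOP 3 * FROM jaffle_shop.orders;
+   SELECT TOP 3 * FROM jaffle_shop.payments;
+   ```
+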
## Connect dbt cloud to Teradata
@@ -136,12 +134,10 @@ Now that you have a repository configured, you can initialize your project and s
1. Click **Start developing in the IDE**. It might take a few minutes for your project to spin up for the first time as it establishes your git connection, clones your repo, and tests the connection to the warehouse.
2. Above the file tree to the left, click **Initialize your project** to build out your folder structure with example models.
3. Make your initial commit by clicking **Commit and sync**. Use the commit message `initial commit` to create the first commit to your managed repo. Once you’ve created the commit, you can open a branch to add new dbt code.
-4. You can now directly query data from your warehouse and execute `dbt run`. You can try this out now:
- - Click **Create new file**, add this query to the new file, and click **Save as** to save the new file:
- ```sql
- select * from jaffle_shop.customers
- ```
- - In the command line bar at the bottom, enter `dbt run` and click **Enter**. You should see a `dbt run succeeded` message.
+
+## Delete the example models
+
+
## Build your first model
@@ -153,7 +149,7 @@ You have two options for working with files in the dbt Cloud IDE:
Name the new branch `add-customers-model`.
1. Click the **...** next to the `models` directory, then select **Create file**.
-2. Name the file `customers.sql`, then click **Create**.
+2. Name the file `bi_customers.sql`, then click **Create**.
3. Copy the following query into the file and click **Save**.
```sql
@@ -208,7 +204,7 @@ final as (
from customers
- left join customer_orders using (customer_id)
+ left join customer_orders on customers.customer_id = customer_orders.customer_id
)
@@ -222,19 +218,75 @@ You can connect your business intelligence (BI) tools to these views and tables
## Change the way your model is materialized
-
+One of the most powerful features of dbt is that you can change the way a model is materialized in your warehouse, simply by changing a configuration value. You can change things between tables and views by changing a keyword rather than writing the data definition language (DDL) to do this behind the scenes.
-## Delete the example models
+By default, everything gets created as a view. You can override that at the directory level so that everything in that directory uses a different materialization.
-
+1. Edit your `dbt_project.yml` file.
+ - Update your project `name` to:
+
+
+ ```yaml
+ name: 'jaffle_shop'
+ ```
+
+
+ - Configure `jaffle_shop` so everything in it will be materialized as a table; and configure `example` so everything in it will be materialized as a view. Update your `models` config block to:
+
+
+
+ ```yaml
+ models:
+ jaffle_shop:
+ +materialized: table
+ example:
+ +materialized: view
+ ```
+
+
+ - Click **Save**.
+
+2. Enter the `dbt run` command. Your `bi_customers` model should now be built as a table!
+   :::info
+   To do this, dbt had to first run a `drop view` statement, then a `create table as` statement (see the illustrative sketch after this list).
+   :::
+
+3. Edit `models/bi_customers.sql` to override the `dbt_project.yml` setting for the `bi_customers` model only by adding the following snippet to the top, and click **Save**:
+
+
+
+ ```sql
+ {{
+ config(
+ materialized='view'
+ )
+ }}
+
+ with customers as (
+
+ select
+ id as customer_id
+ ...
+
+ )
+
+ ```
+
+
+
+4. Enter the `dbt run` command. Your model, `bi_customers`, should now build as a view.
+
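+Behind the scenes, switching a model between materializations amounts to dropping and recreating the relation in your target database. The following is only an illustrative sketch of that sequence, assuming your target database is `jaffle_shop`; the inner `SELECT` stands in for the model's compiled SQL, and the actual DDL generated by the dbt-teradata adapter may differ:
+
+```sql
+-- Illustrative only: dbt generates and runs statements along these lines for you.
+DROP VIEW jaffle_shop.bi_customers;
+
+CREATE TABLE jaffle_shop.bi_customers AS (
+    SELECT * FROM jaffle_shop.customers
+) WITH DATA
+NO PRIMARY INDEX;
+```
+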
+### FAQs
+
+
+
+
## Build models on top of other models
1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in your original query.
-2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in your original query.
-
```sql
@@ -248,6 +300,7 @@ You can connect your business intelligence (BI) tools to these views and tables
+2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in your original query.
```sql
@@ -262,9 +315,9 @@ You can connect your business intelligence (BI) tools to these views and tables
-3. Edit the SQL in your `models/customers.sql` file as follows:
+3. Edit the SQL in your `models/bi_customers.sql` file as follows:
-
+
```sql
with customers as (
@@ -397,4 +450,61 @@ Sources make it possible to name and describe the data loaded into your warehous
-
+## Commit your changes
+
+Now that you've built your customer model, you need to commit the changes you made to the project so that the repository has your latest code.
+
+**If you edited directly in the protected primary branch:**
+1. Click the **Commit and sync git** button. This action prepares your changes for commit.
+2. A modal titled **Commit to a new branch** will appear.
+3. In the modal window, name your new branch `add-customers-model`. This branches off from your primary branch with your new changes.
+4. Add a commit message, such as "Add customers model, tests, docs," and commit your changes.
+5. Click **Merge this branch to main** to add these changes to the main branch on your repo.
+
+
+**If you created a new branch before editing:**
+1. Since you already branched out of the primary protected branch, go to **Version Control** on the left.
+2. Click **Commit and sync** to add a message.
+3. Add a commit message, such as "Add customers model, tests, docs."
+4. Click **Merge this branch to main** to add these changes to the main branch on your repo.
+
+## Deploy dbt
+
+Use dbt Cloud's Scheduler to deploy your production jobs confidently and build observability into your processes. You'll learn to create a deployment environment and run a job in the following steps.
+
+### Create a deployment environment
+
+1. In the upper left, select **Deploy**, then click **Environments**.
+2. Click **Create Environment**.
+3. In the **Name** field, write the name of your deployment environment. For example, "Production."
+4. In the **dbt Version** field, select the latest version from the dropdown.
+5. Under **Deployment connection**, enter the name of the dataset you want to use as the target, such as "Analytics". This will allow dbt to build and work with that dataset. For some data warehouses, the target dataset may be referred to as a "schema".
+6. Click **Save**.
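+
+If the target database from step 5 doesn't exist in your Vantage instance yet, create it first, mirroring the earlier `CREATE DATABASE` step (the name `analytics` and the perm space value below are only examples; adjust them for your environment):
+
+```sql
+CREATE DATABASE analytics AS PERM = 1e9;
+```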
+
+### Create and run a job
+
+Jobs are a set of dbt commands that you want to run on a schedule. For example, `dbt build`.
+
+As the `jaffle_shop` business gains more customers, and those customers create more orders, you will see more records added to your source data. Because you materialized the `bi_customers` model as a table, you'll need to periodically rebuild your table to ensure that the data stays up-to-date. This update will happen when you run a job.
+
+1. After creating your deployment environment, you should be directed to the page for a new environment. If not, select **Deploy** in the upper left, then click **Jobs**.
+2. Click **Create one**, provide a name (for example, "Production run"), and link it to the environment you just created.
+3. Scroll down to the **Execution Settings** section.
+4. Under **Commands**, add this command as part of your job if you don't see it:
+ * `dbt build`
+5. Select the **Generate docs on run** checkbox to automatically [generate updated project docs](/docs/collaborate/build-and-view-your-docs) each time your job runs.
+6. For this exercise, do _not_ set a schedule for your project to run — while your organization's project should run regularly, there's no need to run this example project on a schedule. Scheduling a job is sometimes referred to as _deploying a project_.
+7. Select **Save**, then click **Run now** to run your job.
+8. Click the run and watch its progress under "Run history."
+9. Once the run is complete, click **View Documentation** to see the docs for your project.
+
+
+Congratulations 🎉! You've just deployed your first dbt project!
+
+
+#### FAQs
+
+
+
+
+