In this homework we'll prepare the environment by creating resources in GCP with Terraform.
Install Terraform on your GCP VM, then copy the Terraform files from the course repo to the VM.
Modify the files as necessary to create a GCP Bucket and a BigQuery Dataset.
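For reference, here is a minimal sketch of what the modified main.tf might look like. The resource blocks are reconstructed from the plan output further below; the terraform/provider blocks and the var.project / var.region references are assumptions based on the usual course repo layout, not confirmed by the output.

# main.tf -- sketch only; the resource local names match the plan output,
# everything else is an assumption
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}

provider "google" {
  project = var.project
  region  = var.region
}

# Data lake bucket in GCS; bucket names must be globally unique,
# hence the project ID suffix
resource "google_storage_bucket" "data-lake-bucket" {
  name          = "dtc_data_lake_${var.project}"
  location      = var.region
  storage_class = "STANDARD"

  uniform_bucket_level_access = true
  force_destroy               = true

  versioning {
    enabled = true
  }

  # Delete objects older than 30 days
  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age = 30
    }
  }
}

# BigQuery dataset for the trip data
resource "google_bigquery_dataset" "dataset" {
  dataset_id = "trips_data_all"
  project    = var.project
  location   = var.region
}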
After updating the main.tf and variables.tf files, run:
terraform apply
Paste the output of this command into the homework submission form.
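Note that terraform init has to be run once beforehand to download the Google provider plugin. Also, because the project variable is declared without a default, terraform apply starts by prompting for a value, as can be seen at the top of the output. Here is a sketch of the relevant part of variables.tf, assuming the course repo's variable names (only var.project and its description are confirmed by the prompt below):

# variables.tf -- sketch; var.region and its default are assumptions
variable "project" {
  description = "Your GCP Project ID"
  # no default, so terraform prompts for a value at apply time
}

variable "region" {
  description = "Region for GCP resources"
  type        = string
  default     = "northamerica-northeast1"
}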
Below is the output of the terraform apply command:
var.project
  Your GCP Project ID

  Enter a value: hopeful-summer-375416


Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_bigquery_dataset.dataset will be created
  + resource "google_bigquery_dataset" "dataset" {
      + creation_time              = (known after apply)
      + dataset_id                 = "trips_data_all"
      + delete_contents_on_destroy = false
      + etag                       = (known after apply)
      + id                         = (known after apply)
      + labels                     = (known after apply)
      + last_modified_time         = (known after apply)
      + location                   = "northamerica-northeast1"
      + project                    = "hopeful-summer-375416"
      + self_link                  = (known after apply)

      + access {
          + domain         = (known after apply)
          + group_by_email = (known after apply)
          + role           = (known after apply)
          + special_group  = (known after apply)
          + user_by_email  = (known after apply)

          + dataset {
              + target_types = (known after apply)

              + dataset {
                  + dataset_id = (known after apply)
                  + project_id = (known after apply)
                }
            }

          + routine {
              + dataset_id = (known after apply)
              + project_id = (known after apply)
              + routine_id = (known after apply)
            }

          + view {
              + dataset_id = (known after apply)
              + project_id = (known after apply)
              + table_id   = (known after apply)
            }
        }
    }

  # google_storage_bucket.data-lake-bucket will be created
  + resource "google_storage_bucket" "data-lake-bucket" {
      + force_destroy               = true
      + id                          = (known after apply)
      + location                    = "NORTHAMERICA-NORTHEAST1"
      + name                        = "dtc_data_lake_hopeful-summer-375416"
      + project                     = (known after apply)
      + public_access_prevention    = (known after apply)
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)

      + lifecycle_rule {
          + action {
              + type = "Delete"
            }
          + condition {
              + age                   = 30
              + matches_prefix        = []
              + matches_storage_class = []
              + matches_suffix        = []
              + with_state            = (known after apply)
            }
        }

      + versioning {
          + enabled = true
        }

      + website {
          + main_page_suffix = (known after apply)
          + not_found_page   = (known after apply)
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

google_bigquery_dataset.dataset: Creating...
google_storage_bucket.data-lake-bucket: Creating...
google_bigquery_dataset.dataset: Creation complete after 1s [id=projects/hopeful-summer-375416/datasets/trips_data_all]
google_storage_bucket.data-lake-bucket: Creation complete after 1s [id=dtc_data_lake_hopeful-summer-375416]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
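Once the homework has been graded, the two resources can be removed again so the project does not keep accruing cost:

terraform destroy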
Thanks to the whole DataTalksClub Team!
Alain