
Week 6 & 7 — Deploying Containers / Solving CORS with a Load Balancer and Custom Domain


Weekly Outcome

Week 6

  • Being able to push and tag container images to a remote repository
  • Practical knowledge of deploying, configuring, and updating a serverless container
  • Basic knowledge of working with a cloud CLI

Week 7

  • Working with DNS records and hosted zones
  • Configuring TLS termination at the load balancer
  • Deploying and configuring a load balancer for multiple subdomains
  • Basic understanding of solving CORS issues and backend-to-frontend communication

Weekly Summary

Reflection

This week was okay. I didn't have any trouble with the bash scripts; I refactored them to be more modular so I didn't have to keep separate utility scripts for the frontend and backend.
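
As a rough illustration of that refactor, a single script can take the service name as an argument instead of duplicating per-service logic (the script name, layout, and paths here are hypothetical):

```bash
#!/usr/bin/env bash
# Hypothetical shared build script: ./bin/build <service>
set -euo pipefail

SERVICE="$1"   # e.g. backend-flask or frontend-react-js

# Resolve the project root relative to this script's location.
ABS_PATH=$(readlink -f "$0")
BIN_DIR=$(dirname "$ABS_PATH")
PROJECT_PATH=$(dirname "$BIN_DIR")

# Build the image for whichever service was requested.
docker build -t "$SERVICE" "$PROJECT_PATH/$SERVICE"
```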

Challenges

I had a problem in Gitpod where the frontend and backend ports would get stuck on "detecting" after bringing the stack up with docker-compose. I tried an earlier Gitpod workspace image, but the issue persisted. Following the Gitpod docs on ports, I ran `gp ports list`, which showed the frontend and backend ports as "open on localhost" instead of "open (public)". Running `gp preview $(gp url 3000) --external`, as shown in the same docs, opened the preview URL, and I was able to continue with development.
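
A more durable fix (assuming the standard `.gitpod.yml` layout and the app's usual ports) is to declare the ports up front with public visibility so they are exposed automatically:

```yaml
# .gitpod.yml — declare ports so they aren't stuck on "detecting".
ports:
  - port: 3000          # frontend-react-js
    visibility: public
  - port: 4567          # backend-flask
    visibility: public
```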


Knowledge Transfer

Key Takeaways

  • When launching services from a container registry, it's better to pull images from ECR than from Docker Hub (pulls stay inside AWS and, for example, avoid Docker Hub rate limits).
  • Due to the "one process/concern per container" design, there is little benefit to the multi-process web servers used on virtual machines, such as Gunicorn. Scale out containers as needed rather than needlessly running all the processes in one container.
    • In a container, think of a vCPU as a CPU with the provisioned cycles/GHz rather than in terms of threads and hyperthreading.
  • Most of the logging, scaling, etc. is built into Fargate's orchestration itself. (The infrastructure for Fargate is managed by AWS rather than needing to be set up by the user.)
  • On Fargate you can only use the awsvpc network mode; on ECS with the EC2 launch type you can also use host, bridge, etc.
    • In awsvpc network mode, you can attach a different security group per service on the ENI (see the sketch after this list).
  • Service Connect expands on App Mesh and runs Cloud Map in the background to make configuring the Envoy proxy more manageable at scale.
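
For the awsvpc point above, a minimal sketch of attaching a service-specific security group when creating a service with the AWS CLI (the cluster, subnet, security-group, and name values are all placeholders):

```bash
# Placeholders throughout — substitute your own cluster, subnets, and SG.
aws ecs create-service \
  --cluster cruddur \
  --service-name backend-flask \
  --task-definition backend-flask \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa,subnet-bbbb],securityGroups=[sg-backend-only],assignPublicIp=ENABLED}"
```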

Questions

Q. Why does X-Ray need to be on both services?


Required Homework

ECR Repo

I created a repo for cruddur-python, but I didn't want to add my account ID to the committed Dockerfile, and I was concerned about having to maintain that image myself, periodically rebuilding it to pick up security fixes since the last pushed tag.

Instead, I used Docker's official image from the public registry on ECR. It appeared to build correctly and passed the health check.

I still created and pushed the private repo for backend-flask.
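
For reference, the push flow for the private backend-flask repo looks roughly like this (the region, account ID, and paths are placeholders):

```bash
# Placeholders — set these to your own values.
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCOUNT_ID=111111111111
export ECR_URL="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"

# Authenticate Docker against the private registry.
aws ecr get-login-password --region "$AWS_DEFAULT_REGION" \
  | docker login --username AWS --password-stdin "$ECR_URL"

# Build, tag, and push the backend image.
docker build -t backend-flask ./backend-flask
docker tag backend-flask:latest "$ECR_URL/backend-flask:latest"
docker push "$ECR_URL/backend-flask:latest"
```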

Deploy Services on Cruddur ECS Cluster

Backend Container Deployed on ECS
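
After registering a new task definition or pushing a new image tag, one way to update the running service is to force a new deployment (the cluster and service names here are assumed):

```bash
# Roll the service so its tasks pull the newly pushed image.
aws ecs update-service \
  --cluster cruddur \
  --service backend-flask \
  --force-new-deployment
```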

Configure Application Load Balancer

Configured ALB for Cruddur
ALB Target Groups Connected through ALB
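
In rough terms, the ALB setup comes down to a target group per service plus an HTTPS listener that terminates TLS; a sketch with the AWS CLI (every ID and ARN below is a placeholder, and the health-check path assumes backend-flask's /api/health-check route):

```bash
# Target group for the backend containers (IP targets for awsvpc mode).
aws elbv2 create-target-group \
  --name cruddur-backend-flask-tg \
  --protocol HTTP --port 4567 \
  --vpc-id vpc-xxxxxxxx \
  --target-type ip \
  --health-check-path /api/health-check

# HTTPS listener terminating TLS at the load balancer with an ACM cert.
aws elbv2 create-listener \
  --load-balancer-arn <cruddur-alb-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<backend-target-group-arn>
```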

Setup Custom Domain using Route53

Cruddur on Custom Domain
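
On the DNS side, the hosted zone gets an alias A record pointing the (sub)domain at the ALB; a sketch (the zone IDs, domain, and ALB DNS name are placeholders):

```bash
# Alias A record pointing the API subdomain at the ALB.
# Note: AliasTarget.HostedZoneId must be the ALB's own zone ID for its
# region, not your hosted zone's ID.
aws route53 change-resource-record-sets \
  --hosted-zone-id <your-hosted-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<alb-region-zone-id>",
          "DNSName": "<cruddur-alb-dns-name>",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```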

Configure Task Definitions for X-Ray

Backend Service with X-Ray Task
Frontend Service with X-Ray Task
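
For reference, the X-Ray daemon runs as a sidecar container inside each task definition; a minimal sketch of just that container entry, using the public daemon image (the rest of the task definition is omitted):

```json
{
  "name": "xray",
  "image": "public.ecr.aws/xray/aws-xray-daemon",
  "essential": true,
  "portMappings": [
    { "containerPort": 2000, "protocol": "udp" }
  ]
}
```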