An opinionated, infrastructure-forward modern web template built for continuous integration and delivery, using Docker, AWS Fargate, and NodeJS.
Created in part with reference to the following guides:
- https://aws.amazon.com/blogs/opensource/github-actions-aws-fargate/
- https://itnext.io/run-your-containers-on-aws-fargate-c2d4f6a47fda
- https://medium.com/@ariklevliber/aws-fargate-from-start-to-finish-for-a-nodejs-app-9a0e5fbf6361
- https://docs.aws.amazon.com/AmazonECS/latest/userguide/create-application-load-balancer.html
This is kind of a big question. Fundamentally, though, I think that 90% of tutorials, templates, and 'get started' guides only ever take you about half the way there (if that!), and as a relative newcomer to scalable, production-quality infrastructure, I could have really benefited from a guide like this one.
For a more detailed breakdown, see this piece-by-piece rationale.
Here's what to expect.
- Served by NodeJS using the Koa framework
- Rendered by React
- Styled with Emotion
- Tested with Jest
- Linted, prettified, and written in highly safe and readable TypeScript
- Packaged into a Docker image on deploy
Note that you do not need Docker installed to run the application in development, but you will likely want to have it eventually to customize your containers.
- Clone the repository
- Run `npm install`
- Make a copy of the provided `template.env` file and rename it `local.env` (this lets you connect to your local MySQL server; a sketch of what it might contain follows this list)
- Create a `demodb` schema and run the `demodb.sql` file against your local MySQL server
- Run `npm run dev` to start the development servers and begin hacking
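What goes in `local.env` depends entirely on `template.env`; as a purely hypothetical sketch (every key and value below is an assumption, not the template's actual contents), a local MySQL setup might look like:

```
# Hypothetical keys -- copy the real ones from template.env
DB_HOST=localhost
DB_PORT=3306
DB_USER=root
DB_PASSWORD=my-local-password
DB_NAME=demodb
```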
One of the most critical pieces of development is being able to step through code easily. There are two ways to step through the server code:
- Attach to the currently running process by running the `Attach to Server` configuration from VS Code's debug menu.
- Stop the server (`pm2 stop server`) and then run the `Launch Debug Server` configuration from VS Code's debug menu (configured in the `launch.json` file).
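As a reference sketch, the two configurations in `launch.json` might look roughly like this (the inspector port and entry-point path are assumptions, not necessarily what this repo uses):

```jsonc
{
  "version": "0.2.0",
  "configurations": [
    {
      // Attaches to an already-running node process (e.g. the pm2-managed server)
      "name": "Attach to Server",
      "type": "node",
      "request": "attach",
      "port": 9229 // assumption: default node inspector port
    },
    {
      // Launches the server directly under the debugger
      "name": "Launch Debug Server",
      "type": "node",
      "request": "launch",
      "runtimeArgs": ["-r", "ts-node/register"],
      "program": "${workspaceFolder}/src/server/index.ts" // assumption: actual entry point may differ
    }
  ]
}
```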
When you're ready to deploy your application to a 'production-like' staging environment, follow the steps below (Infrastructure) to set up AWS resources for that environment (you will need to repeat these steps for your production environment). Once the resources are available, configure the necessary environment variables as secrets in the AWS Secrets Manager and replace the `[[arn]]` fields in your `task-def-staging.json` file.
Test the deploy by going to your GitHub repository and navigating to Actions > Deploy to Staging > Run workflow and hitting the green button to run the workflow.
Once you've verified that your manual deploys are working, I'd recommend changing the run condition in `.github/workflows/deploy_staging.yml` to run the staging deploy on every push to the `main` branch, as sketched below. The production deploy trigger should always be manual.
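For example, the trigger block in `deploy_staging.yml` might end up looking like this (a sketch; keep `workflow_dispatch` if you still want manual runs from the Actions tab):

```yaml
on:
  workflow_dispatch: # still allows manual runs from the Actions tab
  push:
    branches:
      - main
```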
You will need the `aws` command line tool installed to execute these steps.
- Create a new IAM Group "DeployGroup" with the following policies:
- AmazonEC2ContainerRegistryFullAccess
- AmazonECS_FullAccess
- Add a new IAM User "GitHub" to the DeployGroup
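If you'd rather script this than click through the console, a CLI sketch of the same setup (access keys for the GitHub user still need to be created and added as repository secrets):

```sh
# Create the group and attach the two deploy policies
aws iam create-group --group-name DeployGroup
aws iam attach-group-policy --group-name DeployGroup \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam attach-group-policy --group-name DeployGroup \
  --policy-arn arn:aws:iam::aws:policy/AmazonECS_FullAccess

# Create the GitHub user and add it to the group
aws iam create-user --user-name GitHub
aws iam add-user-to-group --group-name DeployGroup --user-name GitHub
```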
See Tutorial: Creating a VPC with Public and Private Subnets for Your Clusters.
- Create a new security group for your Fargate cluster (e.g. `darkbridge-fargate-staging-sg`)
- Ensure the load balancer uses the same security group as the Fargate tasks
- Create a new Application Load Balancer
- Set the target type to IP (see the CLI sketch after this list)
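A rough CLI equivalent, using this repo's `[[placeholder]]` convention (the resource names here are assumptions):

```sh
aws ec2 create-security-group --group-name darkbridge-fargate-staging-sg \
  --description "Staging Fargate security group" --vpc-id [[vpc-id]]

aws elbv2 create-load-balancer --name darkbridge-alb-staging \
  --subnets [[public-subnet1]] [[public-subnet2]] --security-groups [[security-group]]

# Target type must be "ip" for Fargate (awsvpc networking)
aws elbv2 create-target-group --name darkbridge-tg-staging --protocol HTTP \
  --port 80 --target-type ip --vpc-id [[vpc-id]]

aws elbv2 create-listener --load-balancer-arn [[alb-arn]] --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=[[target-group-arn]]
```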
Create the ECR repository:

```sh
aws ecr create-repository --repository-name darkbridge_registry --region us-east-1
```
These first three steps are necessary for the GitHub Actions workflow to succeed.
Ensure that the `ecsTaskExecutionRole` role is available and can be assumed by the GitHub workflow as described here.
- Create a role (if it does not already exist) called `ecsTaskExecutionRole` with the `AmazonECSTaskExecutionRolePolicy` policy
- Also add the `SecretsManagerReadWrite` policy
- Replace the trust relationship with the following:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
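A CLI sketch of the same setup, assuming the trust policy above has been saved locally as `trust-policy.json`:

```sh
aws iam create-role --role-name ecsTaskExecutionRole \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
aws iam attach-role-policy --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
```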
Now register the task definition:
```sh
aws ecs register-task-definition --region us-east-1 --cli-input-json file://$HOME/darkbridge/task-def-staging.json
```
Create the cluster:

```sh
aws ecs create-cluster --region us-east-1 --cluster-name darkbridge-cluster-staging
```
Make sure to specify the cluster for your service. Also note that you have to associate the load balancer with the service at the time of service creation. E.g.

```sh
aws ecs create-service --region us-east-1 --service-name darkbridge-service-staging --task-definition darkbridge-task-staging:1 --desired-count 2 --launch-type "FARGATE" --network-configuration "awsvpcConfiguration={subnets=[ [[private-subnet1]],[[public-subnet1]],[[private-subnet2]],[[public-subnet2]] ],securityGroups=[ [[security-group]] ]}" --load-balancers "targetGroupArn=[[arn]], containerName=darkbridge-container, containerPort=80" --cluster darkbridge-cluster-staging
```
- Go to the AWS Secrets Manager
- Add an 'Other' > 'Plaintext' secret
- Name it `EXAMPLE_SECRET_VARIABLE` and replace the ARN in `task-def.json`
- Add the `SecretsManagerReadWrite` policy to your task definition's `executionRole`
See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html
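If you prefer the CLI to the console, creating the secret looks like this (the secret string is a placeholder):

```sh
aws secretsmanager create-secret --region us-east-1 \
  --name EXAMPLE_SECRET_VARIABLE --secret-string "replace-me"
```

The container definition in `task-def.json` then references the secret by ARN, roughly:

```json
"secrets": [
  {
    "name": "EXAMPLE_SECRET_VARIABLE",
    "valueFrom": "[[arn]]"
  }
]
```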
- Make sure the security group has an HTTP inbound rule set to `0.0.0.0/0` to allow public access (see the sketch after this list)
- The ALB must be created and the target group specified before the service is created (see above)
- Similarly, the app in the task definition file, the ALB, and so on should be given better names.
- By default, Fargate containers are limited to 200 MiB of memory; running the server with ts-node, for example, creates an unstable service since ts-node compiles to memory. It's much better to compile to disk for production.
- Re: environment variables, Fargate only supports secrets that are a single value, not JSON or key/value secrets. So choose 'Other' when creating the secret and just put a single text value there.
- If a cluster is expected but not provided, you'll occasionally see a confusing "missing cluster: default" error; this usually means a `--cluster` flag needs to be specified in the CLI command
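A CLI sketch for that inbound rule, using the repo's `[[placeholder]]` convention:

```sh
# Allow public HTTP traffic from anywhere to reach the load balancer/tasks
aws ec2 authorize-security-group-ingress --group-id [[security-group]] \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```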
- Build the image: `docker build -t [tag_name] .`
- Launch the image in the background, exposing port 80: `docker run -d -p 80:80 [tag_name]`
- Navigate to `localhost` in your browser
- Connecting to RDS
- Allow attaching to the server process for debugging
- Optional connection to Mailgun
- Continuous integration tests run on push
- Connecting to S3
- Sourcemaps for production error monitoring
- Use the image output from the staging deploy for the prod deploy