
Stack to create files in S3 buckets at 5-minute intervals

Overview

This repository will cover the following:

  • A containerized app that uploads files to an S3 bucket (folder src/).
  • A Helm chart that packages the app, with per-environment values for qa and staging (folder charts/).
  • The underlying infrastructure needed to support the app, deployed as IaC via Terraform (folder iac/).

Note: this README was written with Linux users in mind; other operating systems may be covered in the future 😅

Prerequisites ✔️

  • Have an account on AWS.
  • While creating your IAM user, don't forget to download the .csv file with the generated access key/secret pair; you'll need it in the next steps.
  • Install Terraform.
  • Create an S3 bucket on AWS to store the infrastructure state (fix the name in backend.tf if necessary); see the snippet after this list.
  • Have helm and kubectl installed.
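
If you don't yet have a state bucket, one way to create it is with the AWS CLI. The bucket name and region below are placeholders; use your own and keep backend.tf in sync:

    # placeholder bucket name and region -- replace with your own
    aws s3 mb s3://my-terraform-state-bucket --region us-east-1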

Getting started 🚀

  1. Install the AWS CLI and execute aws --version right after the installation; something like the snippet below should be displayed:

    aws-cli/2.0.56 Python/3.7.3 Linux/5.4.0-51-generic exe/x86_64.ubuntu.18

  2. Associate your AWS credentials with the local AWS CLI so Terraform is able to identify which credentials to use while provisioning the infrastructure, as shown below.
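
One way to do this is aws configure, pasting in the key/secret pair from the .csv you downloaded earlier (the region and output format below are only examples):

    aws configure
    # AWS Access Key ID [None]: <AWS_ACCESS_KEY_ID>
    # AWS Secret Access Key [None]: <AWS_SECRET_ACCESS_KEY>
    # Default region name [None]: us-east-1
    # Default output format [None]: json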

Terraform files

This repository is responsible for provisioning the target infrastructure. It relies on two modularized structures: one written by me, which lives inside the folder iac/modules, and another provided by HashiCorp. In this section I'll go through the files I have created.

  • backend.tf: Defines a remote, versioned backend using the S3 bucket service. The S3 bucket must be created before initializing Terraform.
  • main.tf: Defines how to create the cluster, S3 buckets, and S3 bucket lifecycles by providing the required variables to all the necessary modules.
  • output.tf: Defines which information is displayed when the apply is successfully executed.
  • provider.tf: Defines the providers required to deploy/create the infrastructure.
  • security-groups.tf: Defines the security group attached to the EC2 instances, allowing access on port 22 (SSH).
  • kubernetes.tf: Defines the required namespaces to be created right after the cluster deployment.
  • variables.tf: The variable values used to fill the module calls.

Helm chart files

  • cronjob.yaml: Defines the resource that is created when running helm install.
  • env-values.yaml: Defines the values applied according to the deployment environment, including the requirement of running every five minutes; see the render preview below.
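
To preview what the chart renders for a given environment without installing anything, you can use helm template (a standard Helm command; the paths below match this repository's layout):

    helm template charts/ -f charts/environments/qa-values.yaml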

App files

  • Dockerfile: The file used to build the Docker image containing the app code.
  • requirements.txt: The packages required to run the application.
  • src/run.py: The code itself, responsible for creating files inside an S3 bucket; see the sketch after this list.
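
Conceptually, each run does the equivalent of the shell sketch below (the file-name prefix mirrors the log output shown later; the bucket name is a placeholder):

    # create an empty, timestamped file and upload it to the bucket
    FILE="platform-challange-$(date +%Y-%m-%d-%H:%M:%S).txt"
    touch "$FILE"
    aws s3 cp "$FILE" s3://<BUCKET_NAME>/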

Provisioning

Follow the steps below to provision the infrastructure without headaches:

  1. Inside the iac/ folder of the repository, execute:

    terraform init

  2. Once the modules, backend, and plugins are initialized, execute:

    terraform plan

  3. Check that all the necessary resources are planned to be provisioned, then execute:

    terraform apply -auto-approve

  4. Once all the infrastructure is deployed, execute the following command from the repository root dir to configure the EKS cluster context on your machine:

    aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)

Execute kubectl get ns to check whether the qa and staging namespaces were created:

NAME              STATUS   AGE
default           Active   3h9m
kube-node-lease   Active   3h9m
kube-public       Active   3h9m
kube-system       Active   3h9m
qa                Active   10s
staging           Active   10s

  5. After validating it, you can go one of two ways: use my pre-built Docker image, which is already configured in qa-values.yaml and staging-values.yaml and available on Docker Hub, or build your own in another public registry/repository:

    docker build -t registry/repository/image:tag .
    e.g. docker build -t docker.io/atilarmao/s3-file-creation:0.0.1 .

  6. Push the Docker image as follows:

    docker push registry/repository/image:tag
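
If the registry rejects the push, you may need to authenticate first (Docker Hub shown as an example):

    docker login docker.io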

  7. If you opted for your own image, update the image field in qa-values.yaml and staging-values.yaml.
  8. Last but not least, deploy the app using Helm to both environments, qa and staging:
qa environment
helm install s3-creation charts/ -f charts/environments/qa-values.yaml -n qa --set cronjob.spec.env.ACCESS_KEY=<AWS_ACCESS_KEY_ID> --set cronjob.spec.env.SECRET_KEY=<AWS_SECRET_ACCESS_KEY>

staging environment
helm install s3-creation charts/ -f charts/environments/staging-values.yaml -n staging --set cronjob.spec.env.ACCESS_KEY=<AWS_ACCESS_KEY_ID> --set cronjob.spec.env.SECRET_KEY=<AWS_SECRET_ACCESS_KEY>

After that, something like the snippet below should be displayed:

NAME: s3-creation
LAST DEPLOYED: Mon May 16 00:46:17 2022
NAMESPACE: staging
STATUS: deployed
REVISION: 1
TEST SUITE: None

Up to five minutes later, a Pod execution should be created and a log like the one below should be displayed:

kubectl logs -n staging staging-s3-file-creation-1652673000-7pqvj

INFO:asyncio:file created with success: platform-challange-2022-05-16-03:50:07.txt
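
To find the Pod name used in the logs command above, list the Pods in the target namespace:

    kubectl get pods -n staging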

Destroying the provisioned infrastructure

  1. Uninstall the deployed app:

    helm uninstall s3-creation -n qa
    helm uninstall s3-creation -n staging

  2. Empty all the buckets:

    aws s3 rm s3://<BUCKET_NAME> --recursive
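
To confirm a bucket is empty before destroying the infrastructure, you can list its contents (no output means empty):

    aws s3 ls s3://<BUCKET_NAME>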

Once the buckets are empty, execute the following command inside the iac/ folder:

terraform destroy -auto-approve

Any questions?

Feel free to contact me via [email protected]
