This example demonstrates how to integrate Amazon API Gateway with Amazon Simple Queue Service (Amazon SQS). Rather than focusing only on the technical aspects, it is built around a common use case: triggering long-running batch processes with minimal input.
In this use case, client applications call an API exposed through Amazon API Gateway to submit a new job request. The job requests are buffered in an Amazon SQS queue. An AWS Lambda function processes each request by triggering the business logic, and uses Amazon DynamoDB to track the status of each request. Client applications can then retrieve the status of each submitted job through a job status API endpoint.
In this example, an AWS Lambda function simulates the business logic: it generates a random weather data payload and stores it in an Amazon Simple Storage Service (Amazon S3) bucket.
This business logic simulation can easily be replaced with real-world business logic using AWS services such as AWS Batch, AWS Step Functions, Amazon Elastic Container Service (Amazon ECS), or AWS Fargate.
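As a rough sketch of what the batch simulator Lambda function does, the snippet below generates a random weather payload and writes it to S3. The function and bucket names are illustrative, not taken from the sample code, and the S3 client is passed in as a parameter so the logic is easy to test:

```python
import json
import random
import uuid
from datetime import datetime, timezone


def generate_weather_payload() -> dict:
    """Build a random weather data payload (the simulated batch output)."""
    return {
        "reading_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "temperature_c": round(random.uniform(-30.0, 45.0), 1),
        "humidity_pct": random.randint(0, 100),
        "wind_speed_kmh": round(random.uniform(0.0, 120.0), 1),
    }


def simulate_batch_job(job_id: str, s3_client, bucket: str) -> str:
    """Simulated batch job: generate a payload, store it in S3, return its key.

    In a real Lambda function, s3_client would be boto3.client("s3").
    """
    payload = generate_weather_payload()
    key = f"output/{job_id}.json"
    s3_client.put_object(Bucket=bucket, Key=key, Body=json.dumps(payload))
    return key
```

In the actual sample this logic runs inside a Lambda handler; the stand-alone function here just illustrates the shape of the simulated workload.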
We created a reference application to showcase the implementation of this use case using a serverless approach. It includes CI/CD pipelines, automated unit and integration testing, and workload observability.
This example includes multiple implementations of the same application using a variety of development platforms and Infrastructure as Code approaches.
The process flow in this implementation is as follows:
- An Amazon API Gateway endpoint (`/submit-job`) is integrated with an Amazon SQS queue. Client applications use this endpoint to submit a new job request.
- Requests submitted via the API are stored in the Amazon SQS queue so that no messages are lost.
- An AWS Lambda function processes each job request from the queue by performing the following actions:
  - Add a tracking entry to an Amazon DynamoDB table.
  - Trigger the batch process; in this example, invoke another batch simulator AWS Lambda function.
  - Once the batch process finishes, update the status of the job request in the Amazon DynamoDB table.
- An AWS Lambda function simulates the business logic of the batch process by randomly generating weather data and storing the payload in an Amazon S3 bucket.
- Client applications can get the status of a specific job request via an Amazon API Gateway endpoint (`/job-status/{job-id}`) with a specific job ID. This endpoint looks up the latest status of the job in the Amazon DynamoDB table and, for jobs with `Completed` status, returns a pre-signed URL to the job output payload, i.e. the randomly generated weather data.
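The queue-processing steps above can be sketched as a single worker function. This is a minimal illustration, not the sample's actual code: the table name, status values, and simulator function name are assumptions, and the DynamoDB table and Lambda client are parameters so the flow can be exercised without AWS access. A production implementation would likely invoke the batch step asynchronously and update the status from a callback rather than inline:

```python
import json


def process_job_request(record, table, lambda_client, simulator_name):
    """Handle one SQS record: track the job, run the batch step, update status.

    In a deployed function, `table` is a boto3 DynamoDB Table resource and
    `lambda_client` is boto3.client("lambda"); they are injected here.
    """
    job = json.loads(record["body"])
    job_id = job["job_id"]

    # 1. Add a tracking entry with an initial status.
    table.put_item(Item={"job_id": job_id, "status": "InProgress"})

    # 2. Trigger the batch process (here: invoke the simulator function).
    lambda_client.invoke(
        FunctionName=simulator_name,
        Payload=json.dumps({"job_id": job_id}),
    )

    # 3. Mark the job as completed once the batch step returns.
    table.update_item(
        Key={"job_id": job_id},
        UpdateExpression="SET #s = :s",
        ExpressionAttributeNames={"#s": "status"},
        ExpressionAttributeValues={":s": "Completed"},
    )
    return job_id


def handler(event, context, table=None, lambda_client=None):
    """SQS-triggered Lambda entry point: process every record in the batch."""
    return [
        process_job_request(r, table, lambda_client, "BatchSimulatorFunction")
        for r in event["Records"]
    ]
```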
These examples create the following resources in your AWS account:
Below are the key tools needed to deploy this application:
- Python 3.x
- AWS Serverless Application Model (SAM) CLI
- AWS CLI
- [AWS Cloud Development Kit (CDK)](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html)
- Docker: You can refer to this installation guide
This project is set up like a standard Python project. You may need to manually create a virtualenv.
```shell
python3 -m venv .venv
```

After the init process completes and the virtualenv is created, use the following steps to activate it and upgrade the packaging tools:

```shell
source .venv/bin/activate
python3 -m pip install --upgrade pip
pip install -U wheel setuptools
```
Install Docker. You can refer to this installation guide
Deploy using:
Each example implements logging with Amazon CloudWatch Logs, emits custom metrics using the CloudWatch Embedded Metric Format, configures Amazon CloudWatch alarms, and creates an Amazon CloudWatch dashboard. AWS X-Ray distributed tracing is enabled wherever it is supported, and the AWS Lambda functions bundle the AWS X-Ray SDK for additional instrumentation. Amazon API Gateway access logging is enabled with a 30-day retention period.
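For reference, a custom metric in the Embedded Metric Format is just a structured JSON log line: CloudWatch extracts the metric automatically when the record is written to the function's log stream. The snippet below builds such a record by hand; the namespace, dimension, and metric names are illustrative, and the sample's own code may instead use a library such as AWS Lambda Powertools to emit EMF:

```python
import json
import time


def emf_record(namespace: str, dimensions: dict, name: str, value, unit: str = "Count") -> dict:
    """Build a CloudWatch Embedded Metric Format record.

    Printing this JSON to stdout inside a Lambda function is enough for
    CloudWatch to extract the metric from the log stream.
    """
    return {
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # epoch milliseconds
            "CloudWatchMetrics": [
                {
                    "Namespace": namespace,
                    "Dimensions": [list(dimensions)],  # one dimension set
                    "Metrics": [{"Name": name, "Unit": unit}],
                }
            ],
        },
        **dimensions,        # dimension values live at the top level
        name: value,         # so does the metric value itself
    }


# Example: count one processed job under a hypothetical namespace.
record = emf_record("JobService", {"Stage": "prod"}, "JobsProcessed", 1)
print(json.dumps(record))
```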
Check the AWS CloudFormation outputs of your deployment to see the Amazon CloudWatch dashboard URL, references to the Amazon API Gateway access logs stream, and alarm topics in Amazon SNS.