This is an Airflow version of the image-processing workflow from the workshop. It uses:
- Amazon Rekognition for image processing
- AWS Lambda for creating Thumbnails
- Amazon S3 for storing/retrieving images
- Amazon DynamoDB for storing the metadata
- Verify the photo shows a clear face.
- Match against the collection of previously indexed faces.
- Resize the photo into thumbnails for display in the app.
- Index the user’s face into the collection so it can be used for matching in the future.
- Store the photo metadata with the user’s profile.
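The steps above can be sketched as a simple Python pipeline. This is a minimal sketch of the control flow only: the helper functions (`detect_faces`, `search_faces_by_image`, etc.) are hypothetical stand-ins, not the actual Rekognition, Lambda, or DynamoDB integrations used in the DAG.

```python
# Minimal sketch of the workflow's control flow.
# All helpers below are hypothetical placeholders, not real AWS calls.

def detect_faces(photo):
    # Stand-in for Rekognition face detection: is exactly one clear face present?
    return photo.get("faces", 0) == 1

def search_faces_by_image(photo, collection_id):
    # Stand-in for searching the collection: return any matched face IDs
    return photo.get("matches", [])

def create_thumbnail(photo):
    # Stand-in for the thumbnail Lambda
    return {"key": photo["key"].replace("images/", "thumbnails/")}

def index_face(photo, collection_id):
    # Stand-in for indexing the face into the collection
    return "face-id-placeholder"

def process_photo(photo, collection_id, user_id):
    """Run the workshop's image-processing steps in order."""
    if not detect_faces(photo):                      # 1. verify a clear face
        return {"status": "rejected", "reason": "no clear face"}
    if search_faces_by_image(photo, collection_id):  # 2. reject duplicates
        return {"status": "rejected", "reason": "face already indexed"}
    thumbnail = create_thumbnail(photo)              # 3. resize to a thumbnail
    face_id = index_face(photo, collection_id)       # 4. index the new face
    metadata = {                                     # 5. store photo metadata
        "userId": user_id,
        "faceId": face_id,
        "thumbnail": thumbnail["key"],
    }
    return {"status": "ok", "metadata": metadata}
```

In the real DAG each step is a task and the rejection paths are branches; this sketch only shows the ordering and the two early-exit conditions.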
aws rekognition create-collection --collection-id image_processing
aws s3api create-bucket --bucket {bucket_name} --region {region}
aws s3api put-bucket-versioning --bucket {bucket_name} --versioning-configuration Status=Enabled
aws s3api put-public-access-block --bucket {bucket_name} --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
aws s3api put-object --bucket {bucket_name} --key requirements.txt --body dags/2.0/requirements.txt --output text
Note down the version number from the last command. It will be used in the next step.
sam build
sam deploy --stack-name MWAA-image-processing -g
Replace TABLE_NAME with the Stack Output DynamoDBTableName and LAMBDA_FN_NAME with the Stack Output LambdaFunctionName in dags/image-processing.py. Then copy the DAG and the test images to the S3 bucket created in Step 2:
aws s3 cp dags/2.0/image_processing.py s3://{bucket_name}/dags/image-processing.py
aws s3 cp images s3://{bucket_name}/images --recursive
- Access the Airflow UI. The webserver URL is in the outputs of the CloudFormation template.
- Trigger the DAG using the JSON below:
{
"s3Bucket":"{bucket_name}",
"s3Key":"images/1_happy_face.jpg",
"RekognitionCollectionId":"image_processing",
"userId": "userId"
}
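Inside the DAG, this payload arrives as the trigger configuration (Airflow exposes it as `dag_run.conf`). A hedged sketch of validating it before use — `parse_trigger_conf` is an illustrative helper, not part of the workshop code:

```python
import json

# Keys the DAG expects in the trigger payload, per the JSON above.
REQUIRED_KEYS = {"s3Bucket", "s3Key", "RekognitionCollectionId", "userId"}

def parse_trigger_conf(raw_json):
    """Parse the trigger JSON and fail fast if a required key is missing."""
    conf = json.loads(raw_json)
    missing = REQUIRED_KEYS - conf.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return conf
```

Validating up front gives a clear error at trigger time instead of a failed task mid-run.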
aws rekognition list-faces --collection-id image_processing
aws rekognition delete-faces \
    --collection-id image_processing \
    --face-ids REPLACE_WITH_FACE_ID