Updates to SAM templates and minor updates to Chroma Key function #118

Open: wants to merge 4 commits into main
1-app-deploy/ride-controller/template.yaml (48 changes: 18 additions & 30 deletions)
@@ -2,50 +2,46 @@ AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Innovator Island - Flow & Traffic Controller (ride queue times service).

Globals:
Function:
Handler: app.handler
Runtime: nodejs20.x
Architectures:
- arm64
Environment:
Variables:
DDBtable: !Ref DDBtable


Resources:

# Ride times are published to this topic
FATcontroller:
Type: AWS::SNS::Topic

# Table to hold state between Lambda invocations

DDBtable:
Type: AWS::DynamoDB::Table
Type: AWS::Serverless::SimpleTable
Properties:
AttributeDefinitions:
- AttributeName: ID
AttributeType: S
KeySchema:
- AttributeName: ID
KeyType: HASH
BillingMode: PROVISIONED
ProvisionedThroughput:
ReadCapacityUnits: 2
WriteCapacityUnits: 2
PrimaryKey:
Name: ID
Type: String

# Function to simulate ride times and closures
UpdateRides:
Type: AWS::Serverless::Function
Properties:
CodeUri: fatController/
Handler: app.handler
Runtime: nodejs20.x
Timeout: 10
Architectures:
- arm64
MemorySize: 128
Environment:
Variables:
DDBtable: !Ref DDBtable
TopicArn: !Ref FATcontroller
Policies:
- DynamoDBCrudPolicy:
TableName: !Ref DDBtable
- Statement:
- Effect: Allow
Resource: !Ref FATcontroller
Action:
- sns:Publish
- SNSPublishMessagePolicy:
TopicName: !GetAtt FATcontroller.TopicName
Events:
UpdateRidesEvent:
Type: Schedule
@@ -57,15 +53,7 @@ Resources:
Type: AWS::Serverless::Function
Properties:
CodeUri: initDB/
Handler: app.handler
Runtime: nodejs20.x
Architectures:
- arm64
Timeout: 15
MemorySize: 128
Environment:
Variables:
DDBtable: !Ref DDBtable
Policies:
- DynamoDBCrudPolicy:
TableName: !Ref DDBtable
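Net effect in this file: the shared Handler, Runtime, Architectures, and the DDBtable environment variable move into Globals, and the state table becomes an AWS::Serverless::SimpleTable, which SAM defaults to on-demand billing when ProvisionedThroughput is omitted. A trimmed sketch of the resulting template, keeping only the parts this diff touches (all names are taken from the diff above):

Globals:
  Function:
    Handler: app.handler
    Runtime: nodejs20.x
    Architectures:
      - arm64
    Environment:
      Variables:
        DDBtable: !Ref DDBtable

Resources:
  # Table to hold state between Lambda invocations
  DDBtable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        Name: ID
        Type: String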
1-app-deploy/sam-app/template.yaml (19 changes: 10 additions & 9 deletions)
@@ -6,15 +6,14 @@ Description: Innovator Island - Theme park backend
Globals:
Function:
Timeout: 5
Runtime: nodejs20.x

Resources:
InitStateFunction:
Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
Properties:
CodeUri: getInitState/
Handler: app.lambdaHandler
MemorySize: 128
Runtime: nodejs20.x
Environment:
Variables:
DDB_TABLE_NAME: !Ref DynamoDBTable
@@ -34,8 +33,6 @@ Resources:
Properties:
CodeUri: getUploadURL/
Handler: app.handler
MemorySize: 128
Runtime: nodejs20.x
Environment:
Variables:
UploadBucket: !Ref UploadBucket
@@ -65,11 +62,7 @@
- AttributeName: partitionKey
KeyType: HASH
- AttributeName: sortKey
KeyType: RANGE
BillingMode: PROVISIONED
ProvisionedThroughput:
ReadCapacityUnits: 2
WriteCapacityUnits: 2
KeyType: RANGE

##########################################
# S3 Buckets #
@@ -302,6 +295,14 @@ Resources:
Condition:
StringEquals:
AWS:SourceArn: !Sub arn:aws:cloudfront::${AWS::AccountId}:distribution/${CloudFrontDistribution}
- Sid: DenyInsecureTransport
Effect: Deny
Principal: '*'
Action: s3:*
Resource: !Sub arn:aws:s3:::${FinalBucket}/*
Condition:
Bool:
aws:SecureTransport: false

WebAppOriginAccessControl:
Type: AWS::CloudFront::OriginAccessControl
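This template follows the same consolidation: the per-function Runtime and MemorySize lines come out, Runtime joins Timeout under Globals, and functions that omit MemorySize fall back to the Lambda default of 128 MB. A trimmed sketch of the pattern, assuming the rest of each function definition is unchanged:

Globals:
  Function:
    Timeout: 5
    Runtime: nodejs20.x

Resources:
  InitStateFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: getInitState/
      Handler: app.lambdaHandler
      # Runtime and Timeout are inherited from Globals;
      # MemorySize defaults to 128 MB when omitted.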
3-photos/1-chromakey/app.py (88 changes: 45 additions & 43 deletions)
@@ -8,13 +8,14 @@
import boto3
import botocore

s3 = boto3.client('s3')
s3 = boto3.client("s3")
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def upload_file(file_name, bucket, object_name=None):
"""Upload a file to an S3 bucket

def upload_file(file_name: str, bucket: str, object_name: str | None = None) -> bool:
"""
Upload a file to an S3 bucket
:param file_name: File to upload
:param bucket: Bucket to upload to
:param object_name: S3 object name. If not specified then same as file_name
@@ -26,93 +27,94 @@ def upload_file(file_name, bucket, object_name=None):
object_name = file_name

# Upload the file
s3_client = s3
try:
response = s3_client.upload_file(file_name, bucket, object_name)
s3.upload_file(file_name, bucket, object_name)
except botocore.exceptions.ClientError as e:
logging.error(e)
logger.error(f"Error uploading file {file_name} to bucket {bucket}: {e}")
return False
return True


def scale_image(image):
_image = image
target_height = 800

height, width, channels = _image.shape
logger.info('Original size: {}h x {}w'.format(height, width))
scale = height/target_height
height, width, _ = _image.shape
logger.info(f"Original size: {height}h x {width}w")
scale = height / target_height
if scale > 1:
_image = cv2.resize(image, (int(width/scale), int(height/scale)))
height, width, channels = image.shape
logger.info('New size: {}h x {}w'.format(int(height/scale), int(width/scale)))
_image = cv2.resize(image, (int(width / scale), int(height / scale)))
height, width, _ = image.shape
logger.info(f"New size: {int(height / scale)}h x {int(width / scale)}w")
return _image

def lambda_handler(event, context):

print ("Starting handler")
def lambda_handler(event, context) -> None:

logger.info("Starting handler")

# get object metadata from event
input_bucket_name = event['Records'][0]['s3']['bucket']['name']
file_key = event['Records'][0]['s3']['object']['key']
output_bucket_name = os.environ['OUTPUT_BUCKET_NAME']
output_file_key = file_key.replace('.jpg', '.png')
print("Input bucket: ", input_bucket_name)
print("Output bucket: ", output_bucket_name)

input_bucket_name = event["Records"][0]["s3"]["bucket"]["name"]
file_key = event["Records"][0]["s3"]["object"]["key"]
output_bucket_name = os.environ["OUTPUT_BUCKET_NAME"]
output_file_key = file_key.replace(".jpg", ".png")
logger.info(
f"Input bucket: {input_bucket_name}",
)
logger.info(f"Output bucket: {output_bucket_name} ")

if output_bucket_name is None:
print("Error: No OUTPUT_BUCKET_NAME environment variable specified.")
logger.error("Error: No OUTPUT_BUCKET_NAME environment variable specified.")
return

# set up local temp file names
local_input_temp_file = '/tmp/' + file_key
local_output_temp_file = '/tmp/out_' + file_key.replace('.jpg', '.png')
logger.info('Local input file: {}'.format(local_input_temp_file))
logger.info('Local output file: {}'.format(local_output_temp_file))
local_input_temp_file = "/tmp/" + file_key
local_output_temp_file = "/tmp/out_" + file_key.replace(".jpg", ".png")
logger.info(f"Local input file: {local_input_temp_file}")
logger.info(f"Local output file: {local_output_temp_file}")

# get the object
s3.download_file(input_bucket_name, file_key, local_input_temp_file)

# HSV range

# (36, 25, 25) - most extreme
# (36, 50, 50) - average
# (36, 100, 100) - relaxed
lower_range = tuple(json.loads(os.environ["HSV_LOWER"]))

# (70, 255, 255) - default
upper_range = tuple(json.loads(os.environ["HSV_UPPER"]))
print('Lower HSV range: ', lower_range)
print('Upper HSV range: ', upper_range)
logger.info(f"Lower HSV range: {lower_range}")
logger.info(f"Upper HSV range: {upper_range}")

# Read in the file
image = cv2.imread(local_input_temp_file)

# Resize the image if larger than target size
image = scale_image(image)

# Flip from RGB of JPEG to BGR of OpenCV
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Convert BGR to HSV color space
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# convert to RGBA
image_alpha = cv2.cvtColor(image, cv2.COLOR_BGR2RGBA)

# Threshold the HSV image to only green colors
mask = cv2.inRange(hsv, lower_range, upper_range)

# Invert the mask (i.e. select everything not green)
mask = ~mask

# Extract the non-green parts of the image
result = cv2.bitwise_and(image_alpha, image_alpha, mask=mask)
#Save the result
cv2.imwrite(local_output_temp_file,result)
#Save to S3

# Save the result
cv2.imwrite(local_output_temp_file, result)

# Save to S3
if upload_file(local_output_temp_file, output_bucket_name, output_file_key):
print('Processed file uploaded.')

return True
logger.info("Processed file uploaded.")
3-photos/2-compositing/template.yaml (4 changes: 1 addition & 3 deletions)
@@ -6,16 +6,14 @@ Parameters:
Type: String
Description: Name of the final S3 bucket

Globals:
Function:
Timeout: 10

Resources:
CompositeFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: lambdaFunction/
Handler: app.handler
Timeout: 10
Runtime: nodejs20.x
Architectures:
- arm64
5-park-stats/2-simulator/sam-app/template.yaml (4 changes: 1 addition & 3 deletions)
@@ -9,9 +9,6 @@ Parameters:
StreamArn:
Type: String

Globals:
Function:
Timeout: 900

Resources:
SimulatorFunction:
@@ -20,6 +17,7 @@ Resources:
CodeUri: simulatorFunction/
Handler: app.handler
Runtime: nodejs20.x
Timeout: 900
Architectures:
- arm64
MemorySize: 3008
6-eventbridge/1-eventbus/sam-app/template.yaml (11 changes: 3 additions & 8 deletions)
@@ -5,17 +5,16 @@ Description: Innovator Island - 6 Event-based architecture - Part 1.
Globals:
Function:
Timeout: 5
Runtime: nodejs20.x
Architectures:
- arm64

Resources:
PublishFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: publishFunction/
Handler: app.handler
Runtime: nodejs20.x
Architectures:
- arm64
MemorySize: 128
Policies:
- Statement:
- Effect: Allow
@@ -28,10 +27,6 @@ Resources:
Properties:
CodeUri: metricsFunction/
Handler: app.handler
Runtime: nodejs20.x
Architectures:
- arm64
MemorySize: 128
Policies:
- Statement:
- Effect: Allow
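Same pattern as the other templates: Runtime and Architectures move into Globals, so every function in this file runs nodejs20.x on arm64 unless it overrides them, and the removed MemorySize again falls back to the 128 MB default. A trimmed sketch of the result (names taken from the diff above):

Globals:
  Function:
    Timeout: 5
    Runtime: nodejs20.x
    Architectures:
      - arm64

Resources:
  PublishFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: publishFunction/
      Handler: app.handler
      # Runtime, Timeout, and Architectures come from Globals.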