This document contains the technical specifications for Feedback Sentiment Analyzer (FSA), a sample application that showcases AWS services and SDKs.
This document explains the following:
- Application inputs and outputs
- Underlying AWS components and their configurations
- Key cross-service integration details
For an introduction to FSA, see the README.md.
- Relational diagram
- User actions
- Application behavior
- HTTP API specification
- Step Functions configuration
- GetFeedback Lambda function
- Processing Amazon S3 events with EventBridge
- Managing items in DynamoDB
- Other FSA material
This diagram represents the relationships between key FSA components.
1. The static website assets are hosted in an Amazon S3 bucket and served using Amazon CloudFront.
2. Amazon Cognito allows authenticated access to Amazon API Gateway.
3. Amazon API Gateway puts objects in an Amazon S3 bucket. This triggers an EventBridge rule that starts a Step Functions workflow.
4. The Step Functions workflow uses AWS Lambda, Amazon Textract, Amazon Comprehend, Amazon Translate, and Amazon Polly to perform the business logic.
5. Metadata is stored in Amazon DynamoDB. Audio files are stored in the same Amazon S3 bucket used in step 3.
6. Amazon API Gateway fetches the metadata from Amazon DynamoDB.
This application receives three inputs from the frontend.
- Authenticate user (uses Cognito)
- Load page (uses DynamoDB:GetItems)
- Upload image to Amazon S3 (uses S3:PutObject)
This application produces two outputs from the backend:
- Put new item in the database (uses DynamoDB:PutItem)
- Put new synthesized audio to Amazon S3 (uses S3:PutObject)
All the APIs are created by the CDK script. The endpoints are common to every language variation and do not need any additional implementation.
PUT /api/media/{item}
Create or update an object in Amazon S3. Creating or updating an image file will trigger the Step Functions workflow.
parameters | request body | response body |
---|---|---|
item - the object key of the item to create or update | JavaScript File object | Empty |
GET /api/media/{item}
Get an object from Amazon S3.
parameters | request body | response body |
---|---|---|
item - the object key of the item to get | Empty | Empty |
GET /api/feedback
Get the translated text, sentiment, and audio/image keys for an uploaded image. This data comes from Amazon DynamoDB. The database table is filled as a result of running the Step Functions workflow.
parameters | request body | response body |
---|---|---|
Empty | Empty | { "feedback": [ { "sentiment": "POSITIVE", "text": "I love this hotel!", "audioUrl": "PXL_20230710_182358532.jpg.mp3", "imageUrl": "PXL_20230710_182358532.jpg" } ] } |
GET /api/env
Get the environment variables required to connect to an Amazon Cognito hosted UI. The frontend calls this automatically to facilitate sign in.
parameters | request body | response body |
---|---|---|
Empty | Empty | { "COGNITO_SIGN_IN_URL": "https://...", "COGNITO_SIGN_OUT_URL": "https://..." } |
When an image is created or updated in an S3 media bucket, a Step Functions state machine is triggered.
The sequence of this multi-state workflow follows:
1. Start
2. ExtractText - Extracts text from an image
3. AnalyzeSentiment - Detects text sentiment
4. ContinueIfPositive (skip to step 7 if sentiment is NEGATIVE)
5. TranslateText - Translates text to English
6. SynthesizeAudio - Synthesizes human-like audio from text
7. DynamoDB:PutItem (see table config)
8. Stop
The following diagram depicts this sequence.
Following are the required inputs and outputs of each Lambda function.
ExtractText uses the Amazon Textract DetectDocumentText method to extract text from an image and return a unified text representation.
It takes the data available on the Amazon S3 event object as input.
For example:
{
"bucket": "amzn-s3-demo-bucket",
"region": "us-east-1",
"object": "obj/ect.png"
}
Returns a string representing the extracted text.
For example:
CET HÔTEL ÉTAIT SUPER
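A minimal sketch of this function, in Python with boto3, assuming the unified representation is the detected LINE blocks joined with spaces (the deployed implementations vary by language):

```python
import boto3

def handler(event, context):
    """Extract text from the uploaded image with Amazon Textract."""
    textract = boto3.client("textract", region_name=event["region"])
    response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": event["bucket"], "Name": event["object"]}}
    )
    # Join every detected LINE block into one unified string.
    lines = [block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"]
    return " ".join(lines)
```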
AnalyzeSentiment uses the Amazon Comprehend DetectSentiment method to detect sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE).
It takes the data available on the Lambda event object as input.
For example:
{
"source_text": "CET HÔTEL ÉTAIT SUPER",
"region": "us-east-1"
}
Returns the determined sentiment and language code.
For example:
{
"sentiment": "POSITIVE",
"language_code": "fr-FR"
}
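DetectSentiment requires a language code, so a sketch of this function must detect the language first. The following Python sketch with boto3 is an assumption, not the deployed implementation; note that Comprehend returns short codes such as "fr", while the example above shows "fr-FR".

```python
import boto3

def handler(event, context):
    """Detect the dominant language of the text, then its sentiment."""
    comprehend = boto3.client("comprehend", region_name=event["region"])
    text = event["source_text"]
    # DetectSentiment needs a language code, so detect the language first.
    languages = comprehend.detect_dominant_language(Text=text)["Languages"]
    language_code = languages[0]["LanguageCode"]
    result = comprehend.detect_sentiment(Text=text, LanguageCode=language_code)
    return {"sentiment": result["Sentiment"], "language_code": language_code}
```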
TranslateText uses the Amazon Translate TranslateText method to translate text to English and return a string.
It takes the data available on the Lambda event object as input.
For example:
{
"source_language_code": "fr-FR",
"region": "us-east-1",
"extracted_text": "CET HÔTEL ÉTAIT SUPER"
}
Returns an object containing the translated text.
For example:
{ "translated_text": "THIS HOTEL WAS GREAT" }
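A minimal sketch in Python with boto3, assuming English ("en") as the fixed target language:

```python
import boto3

def handler(event, context):
    """Translate the extracted text to English with Amazon Translate."""
    translate = boto3.client("translate", region_name=event["region"])
    response = translate.translate_text(
        Text=event["extracted_text"],
        SourceLanguageCode=event["source_language_code"],
        TargetLanguageCode="en",
    )
    return {"translated_text": response["TranslatedText"]}
```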
SynthesizeAudio uses the Amazon Polly SynthesizeSpeech method to convert input text into life-like speech and stores the synthesized audio in the provided Amazon S3 bucket with a content type of "audio/mpeg".
It takes the data available on the Lambda event object as input.
For example:
{
"bucket": "amzn-s3-demo-bucket",
"translated_text": "THIS HOTEL WAS GREAT",
"region": "us-east-1",
"object": "comment.png"
}
Returns a string representing the key of the synthesized audio file. The key is the provided object name appended with ".mp3". This key is sent to the frontend, which uses it to get the audio file directly from Amazon S3.
For example, if the object name was "image.jpg", the output would be "image.jpg.mp3".
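A sketch in Python with boto3; the voice ID is an assumption, since any voice that matches the output language would work:

```python
import boto3

def handler(event, context):
    """Synthesize speech with Amazon Polly and store the MP3 in the bucket."""
    polly = boto3.client("polly", region_name=event["region"])
    s3 = boto3.client("s3", region_name=event["region"])

    speech = polly.synthesize_speech(
        Text=event["translated_text"],
        OutputFormat="mp3",
        VoiceId="Joanna",  # assumed voice; any English voice would do
    )
    audio_key = f"{event['object']}.mp3"
    s3.put_object(
        Bucket=event["bucket"],
        Key=audio_key,
        Body=speech["AudioStream"].read(),
        ContentType="audio/mpeg",
    )
    return audio_key
```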
GetFeedback uses the DynamoDB Scan method to get all records from the table. It is invoked by the frontend interface.
There is no input.
Returns a JSON object with one property, feedback, which is an array of objects that each contain four properties:
- sentiment - "POSITIVE" | "NEGATIVE" | "NEUTRAL"
- text - The original text translated to the destination language (English by default).
- audioUrl - The Amazon S3 object key for the synthesized audio file.
- imageUrl - The Amazon S3 object key for the original uploaded image.
For example:
{
"feedback": [
{
"sentiment": "POSITIVE",
"text": "THIS HOTEL WAS GREAT",
"audioUrl": "PXL_20230710_182358532.jpg.mp3",
"imageUrl": "PXL_20230710_182358532.jpg"
}
]
}
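A sketch in Python with boto3, assuming a hypothetical TABLE_NAME environment variable, that the sentiment attribute stores the plain sentiment string, and that a single Scan page covers this sample's data volume:

```python
import os
import boto3

def handler(event, context):
    """Scan the DynamoDB table and reshape each record for the frontend."""
    table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])
    records = table.scan()["Items"]  # one page is assumed to suffice here
    return {
        "feedback": [
            {
                "sentiment": record["sentiment"],
                "text": record["translated_text"],
                "audioUrl": record["audio_key"],
                "imageUrl": record["comment_key"],
            }
            for record in records
        ]
    }
```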
This application relies on an EventBridge rule to trigger the Step Functions state machine when new images are uploaded to Amazon S3 by the frontend.
Specifically, the trigger is scoped to Object Created events emitted by the dynamically named media bucket:
{
"source": ["aws.s3"],
"detailType": ["Object Created"],
"detail": {
"bucket": {
"name": ["<dynamic media bucket name>"]
},
"object": {
"key": [{"suffix": ".png"}, {"suffix": ".jpeg"}, {"suffix": ".jpg"}]
}
}
}
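The rule is created by the CDK script, so no manual setup is needed. For illustration only, an equivalent rule could be registered with boto3 as sketched below. Raw EventBridge patterns spell the key detail-type (the CDK property detailType synthesizes to it), the bucket must have EventBridge notifications enabled, and every name and ARN here is hypothetical.

```python
import json
import boto3

events = boto3.client("events")

# Raw EventBridge patterns use "detail-type" rather than the CDK's detailType.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": ["<dynamic media bucket name>"]},
        "object": {"key": [{"suffix": ".png"}, {"suffix": ".jpeg"}, {"suffix": ".jpg"}]},
    },
}

events.put_rule(Name="fsa-image-uploaded", EventPattern=json.dumps(pattern))
# Point the rule at the state machine (hypothetical ARNs).
events.put_targets(
    Rule="fsa-image-uploaded",
    Targets=[{
        "Id": "fsa-state-machine",
        "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:fsa",
        "RoleArn": "arn:aws:iam::123456789012:role/fsa-events-role",
    }],
)
```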
This application relies on an Amazon DynamoDB table using the following schema.
key | purpose | attribute type | value |
---|---|---|---|
comment_key | Key of the scanned image. | S | Amazon S3 object key |
source_text | Extracted text from the image. | S | Extracted text |
sentiment | Amazon Comprehend sentiment score. | S | Amazon Comprehend JSON object |
source_language | The language detected from the text. | S | Language code |
translated_text | English version of source_text. | S | Translated text |
audio_key | Key of the audio file generated by Amazon Polly. | S | Amazon S3 object key |
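The workflow's DynamoDB:PutItem step writes one item per processed image. A sketch in Python with boto3, using a hypothetical table name; every attribute is a string (S), matching the schema above:

```python
import boto3

def put_feedback_item(result):
    """Persist one processed image's results as a single table item."""
    table = boto3.resource("dynamodb").Table("fsa-feedback-table")  # hypothetical name
    table.put_item(
        Item={
            "comment_key": result["object"],              # S3 key of the scanned image
            "source_text": result["extracted_text"],      # from ExtractText
            "sentiment": result["sentiment"],             # from AnalyzeSentiment
            "source_language": result["language_code"],   # detected language code
            "translated_text": result["translated_text"], # from TranslateText
            "audio_key": result["audio_key"],             # S3 key from SynthesizeAudio
        }
    )
```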
If technical details are not what you seek, try these instead: