GPT-4 Assessment & Core Functionality Overview

The function performs the following steps:

  1. Receives messages from an SQS queue.
  2. Processes each message, which is expected to be a DataFrame of text messages.
  3. Stores the DataFrame in an ElastiCache instance.
  4. Retrieves all messages from the last 5 minutes from the ElastiCache instance.
  5. Calculates the cosine similarity across the last 5 minutes' worth of messages.
  6. If the cosine similarity is above a certain threshold, it calculates the ROUGE-L scores for the messages.
  7. If the average ROUGE-L score is below a certain threshold, it triggers an alert function which sends the DataFrame to a second SQS queue.
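
Read together, the steps imply a handler shaped roughly like the sketch below. `lambda_handler` and `process_data` are named later in this assessment; the helper functions and threshold values are hypothetical stand-ins, with fuller sketches in the sections that follow.

```python
# Sketch only: helper names and threshold values are hypothetical stand-ins.
import json

import pandas as pd

COSINE_THRESHOLD = 0.8  # assumed value, not from the actual code
ROUGE_THRESHOLD = 0.4   # assumed value, not from the actual code

def lambda_handler(event, context):
    # Step 1: SQS delivers one or more records per invocation.
    for record in event["Records"]:
        df = pd.DataFrame(json.loads(record["body"]))  # step 2
        process_data(df)

def process_data(df):
    store_messages(df)                        # step 3: write to ElastiCache
    window = get_recent_messages(minutes=5)   # step 4: last 5 minutes
    if max_cosine_similarity(df, window) > COSINE_THRESHOLD:  # steps 5-6
        if mean_rouge_l(df, window) < ROUGE_THRESHOLD:        # step 7
            trigger_alert(df)                 # publish to second SQS queue
```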

Alignment with Intent:

The function seems to align well with the stated intent.

It efficiently stores messages in, and retrieves them from, a Redis cache.
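
One common layout for this (an assumption here, not confirmed by the code) is a Redis sorted set scored by arrival time, which makes the 5-minute window query a single call:

```python
import json
import time

import redis

# Hypothetical endpoint; the real function initializes its client elsewhere.
r = redis.Redis(host="example-cache.amazonaws.com", port=6379)

def store_messages(df):
    # Score each message with its arrival time so window queries can
    # filter by timestamp. Assumes the DataFrame has a 'text' column.
    now = time.time()
    for msg in df["text"]:
        r.zadd("messages", {json.dumps({"text": msg, "ts": now}): now})

def get_recent_messages(minutes=5):
    cutoff = time.time() - minutes * 60
    return [json.loads(m)["text"] for m in r.zrangebyscore("messages", cutoff, "+inf")]
```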

It uses cosine similarity to detect when inputs in the incoming DataFrames resemble the last 5 minutes' worth of cached messages.
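
With sklearn (which the assessment notes the function uses), the comparison could look like the following; TF-IDF as the vectorizer is an assumption:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def max_cosine_similarity(df, window_texts):
    # Vectorize new and cached messages together, then take the highest
    # pairwise similarity between any new message and the window.
    if not window_texts:
        return 0.0  # nothing cached yet; nothing can be "similar"
    new_texts = df["text"].tolist()
    vectors = TfidfVectorizer().fit_transform(new_texts + window_texts)
    return cosine_similarity(vectors[: len(new_texts)], vectors[len(new_texts):]).max()
```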

If the cosine similarity threshold is breached, it calculates ROUGE-L scores for the new DataFrames.
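
The assessment does not say which ROUGE implementation is used; Google's rouge-score package is one option, sketched here:

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def mean_rouge_l(df, baseline_texts):
    # Average each new message's best ROUGE-L F1 against the baseline.
    if not baseline_texts:
        return 1.0  # no baseline to drift from
    best_scores = [
        max(scorer.score(base, msg)["rougeL"].fmeasure for base in baseline_texts)
        for msg in df["text"]
    ]
    return sum(best_scores) / len(best_scores)
```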

If the inputs are found to be similar and drifting from the baseline, a message is posted to a second SQS queue for further analysis.
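
The alert itself is a standard boto3 publish; the queue URL below is a placeholder:

```python
import boto3

sqs = boto3.client("sqs")
ALERT_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/drift-alerts"  # placeholder

def trigger_alert(df):
    # Forward the flagged DataFrame to the second queue for analysis.
    sqs.send_message(QueueUrl=ALERT_QUEUE_URL, MessageBody=df.to_json(orient="records"))
```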

Improvements / Considerations:

  1. The function seems to assume that the messages in the DataFrame are already in a format suitable for cosine similarity and ROUGE-L calculations. If the messages require any preprocessing, this would need to be added.

Note: Intended as a downstream component of a DLP-style pipeline that processes all inputs/outputs to and from the LLM system so that sensitive data is removed and the DataFrame is properly formatted; the DLP pipeline also publishes to the instantiating SQS queue.

  2. The function calculates the cosine similarity for all messages in the last 5 minutes, not just the new messages. This could be inefficient if the volume of messages is high.

Note: Needs to be tuned to the specific application's message volume, and the SLAs should be made explicit; one way to bound the window is sketched below.
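
As a sketch (assuming the sorted-set layout from the storage sketch above, and reusing its Redis client `r`): cap how many cached messages the window query returns, with the cap as a per-application tuning knob.

```python
import json
import time

def get_recent_messages_capped(minutes=5, max_items=500):
    # Bound the window so a traffic burst cannot blow up the pairwise
    # similarity cost; max_items is a hypothetical tuning parameter.
    cutoff = time.time() - minutes * 60
    raw = r.zrevrangebyscore("messages", "+inf", cutoff, start=0, num=max_items)
    return [json.loads(m)["text"] for m in raw]
```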

  3. The function calculates the ROUGE-L scores for the new messages against the entire baseline DataFrame. If the baseline DataFrame is large, this could be inefficient.

Note: Could be adjusted to use a subset, or a windowing function like the one used for cosine similarity; the intent is to identify poor generalization or abuse of fine-tuned models. A sampling sketch follows.
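
A sketch of the subset option, scoring against a random sample of the baseline to keep the ROUGE-L cost bounded; `sample_size` is a hypothetical knob, and `mean_rouge_l` is the helper sketched earlier:

```python
import random

def mean_rouge_l_sampled(df, baseline_texts, sample_size=100):
    # Trade a little statistical coverage for a bounded comparison cost.
    subset = random.sample(baseline_texts, min(sample_size, len(baseline_texts)))
    return mean_rouge_l(df, subset)
```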

  4. The function does not seem to spin down any resources after completion. If this is a requirement, it would need to be added.

Note: The ElastiCache instance would be provisioned as part of another pipeline, each instance made "hot" by its SQS queue; persistence and spin-down would be handled by a TBD warm-cache layer.

Error Handling:

The function does not appear to handle the case where the Redis cache, the SQS queue, or the S3 bucket is unavailable or fails mid-operation. All three would benefit from explicit error handling.

Note: Yep; gotta do that
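
A hedged sketch of what that handling might look like, wrapping the helpers sketched earlier: catch the service-specific exception, log it, and re-raise so SQS redelivery (or a dead-letter queue) takes over. S3 calls would be wrapped the same way as the SQS publish.

```python
import logging

import botocore.exceptions
import redis.exceptions

logger = logging.getLogger(__name__)

def process_data_safe(df):
    try:
        store_messages(df)
        window = get_recent_messages(minutes=5)
    except redis.exceptions.RedisError:
        logger.exception("ElastiCache unavailable")
        raise  # let Lambda fail the batch so SQS redelivers or DLQs it
    try:
        if max_cosine_similarity(df, window) > COSINE_THRESHOLD:
            if mean_rouge_l(df, window) < ROUGE_THRESHOLD:
                trigger_alert(df)
    except botocore.exceptions.ClientError:
        logger.exception("SQS publish failed")
        raise
```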

AWS Lambda Best Practices Adherence

Separate the Lambda handler from your core logic: The provided function adheres to this practice. The lambda_handler function is separate from the core logic which is encapsulated in the process_data function.

Take advantage of execution environment reuse: The function initializes the boto3 and Redis clients outside the function handler, which is a good practice.

Avoid potential data leaks across invocations: The function does not appear to store any user data or sensitive information in the execution environment.

Use environment variables to pass operational parameters: The function uses constants for operational parameters, which could be replaced with environment variables for better flexibility and security.
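
For example (names illustrative, not the function's actual constants):

```python
import os

COSINE_THRESHOLD = float(os.environ.get("COSINE_THRESHOLD", "0.8"))
ROUGE_THRESHOLD = float(os.environ.get("ROUGE_THRESHOLD", "0.4"))
REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
ALERT_QUEUE_URL = os.environ["ALERT_QUEUE_URL"]  # fail fast at cold start if unset
```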

Control the dependencies in your function's deployment package: The function imports several libraries. It's important to ensure these are included in the deployment package.

Minimize your deployment package size to its runtime necessities: Without knowing the exact deployment package, it's hard to evaluate this. However, the function seems to only import necessary libraries.

Minimize the complexity of your dependencies: The function relies on widely used, well-maintained libraries: pandas, sklearn, boto3, and redis.

Write idempotent code: The function seems to handle duplicate events gracefully. However, without knowing the exact nature of the data, it's hard to fully evaluate this.
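
One common way to make the dedupe explicit (an assumption, not something the code is confirmed to do) is keying on the SQS message ID with a short-lived Redis flag:

```python
import redis

r = redis.Redis(host="example-cache.amazonaws.com", port=6379)  # hypothetical endpoint

def already_processed(message_id, ttl_seconds=900):
    # SET with nx=True returns None when the key already exists,
    # i.e. a duplicate delivery; the TTL bounds the dedupe set's size.
    return r.set(f"seen:{message_id}", 1, nx=True, ex=ttl_seconds) is None
```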

Performance testing your Lambda function: Without performance testing, it's hard to evaluate this. However, the function does include error handling and retry logic.

Use most-restrictive permissions when setting IAM policies: Without knowing the exact IAM policies, it's hard to evaluate this. However, the function seems to only access necessary AWS services.

Be familiar with Lambda quotas: The function includes a MAX_PAYLOAD_SIZE constant, which suggests awareness of Lambda quotas.
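
A guard like the one below would make that constant actionable. The 256 KB figure is SQS's per-message cap; the S3 overflow path is an assumption that also ties in the S3 bucket mentioned under error handling, and the bucket/key names are placeholders.

```python
import json

import boto3

sqs = boto3.client("sqs")
ALERT_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/drift-alerts"  # placeholder
MAX_PAYLOAD_SIZE = 256 * 1024  # SQS per-message cap

def send_payload(body):
    # Oversized DataFrames go to S3; a small pointer goes to SQS instead.
    if len(body.encode("utf-8")) > MAX_PAYLOAD_SIZE:
        boto3.client("s3").put_object(
            Bucket="drift-alert-overflow", Key="payload.json", Body=body
        )
        body = json.dumps({"s3_bucket": "drift-alert-overflow", "s3_key": "payload.json"})
    sqs.send_message(QueueUrl=ALERT_QUEUE_URL, MessageBody=body)
```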

Delete Lambda functions that you are no longer using: This is more of an operational practice than something that can be evaluated in the code.