A chatbot running on AWS Lambda that uses OpenAI's GPT-3.5 model, with a demo chat window to query the bot.
It uses a GPT-3.5 based Completion model to answer queries from a relevant data feed provided as context.
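The core flow looks roughly like this (a minimal sketch assuming the pre-1.0 openai Python SDK and the gpt-3.5-turbo-instruct completion model; the function name and prompt wording are illustrative, not the actual app.py code):

import openai

def answer_query(query: str, context: str) -> str:
    # The scraped context data is prepended to the user query so the
    # completion model answers only from the supplied data feed.
    prompt = (
        "Answer the question based on the context below.\n\n"
        f"Context: {context}\n\nQuestion: {query}\nAnswer:"
    )
    response = openai.Completion.create(
        model="gpt-3.5-turbo-instruct",  # a GPT-3.5 based Completion model
        prompt=prompt,
        max_tokens=300,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()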
Built on vanilla HTML, JS, and CSS
Running the server locally using npm or Python's built-in HTTP server:
$ npm start
-- OR --
$ python3 -m http.server
Built on Python 3.10
- Build the Docker image
- Push the image to ECR
- Create an AWS Lambda function with an exposed endpoint to query, using the image pushed to ECR as its codebase (example push commands below)
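The push follows the standard ECR workflow, roughly as below; the account ID, region, and repository name are placeholders:

$ aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
$ docker tag myfunction:latest <account-id>.dkr.ecr.<region>.amazonaws.com/myfunction:latest
$ docker push <account-id>.dkr.ecr.<region>.amazonaws.com/myfunction:latest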
Testing the backend program locally via Docker:
$ docker build -t myfunction:latest .
$ docker run -p 9000:8080 myfunction:latest
$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
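The empty body invokes the handler with no query. To test an actual question, the request body shape depends on how app.py parses the Lambda event; assuming a top-level query field, for example:

$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"query": "..."}'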
Things to configure before deploying:
- [fe] API endpoint of the newly created Lambda in the front end
- [be] OpenAI API key
- [be] AWS credentials and the bucket name to use for storing the context data
- [be] scrapped-data.csv in the S3 bucket, which holds the scraped data the GPT model uses as context to analyse and answer from (example CSV file attached)
app.py
------
context_file_name : local file name for the downloaded context data (csv)
s3_bucket_name : S3 bucket holding the context data
s3_object_key_path : S3 object key path for the data file (csv)
processed_embedding_file_name : local file name for the GPT-processed embedding file
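Roughly how these settings fit together (a sketch assuming boto3, pandas, and the pre-1.0 openai SDK; the values and the first-column assumption are illustrative, and the actual app.py logic may differ):

import boto3
import openai
import pandas as pd

s3_bucket_name = "my-context-bucket"            # example values, not real config
s3_object_key_path = "scrapped-data.csv"
context_file_name = "scrapped-data.csv"
processed_embedding_file_name = "embeddings.csv"

# Pull the scraped context data down from S3.
boto3.client("s3").download_file(s3_bucket_name, s3_object_key_path, context_file_name)

# Embed each row so relevant context can be selected per query.
df = pd.read_csv(context_file_name)
df["embedding"] = [
    openai.Embedding.create(model="text-embedding-ada-002", input=text)["data"][0]["embedding"]
    for text in df.iloc[:, 0].astype(str)
]
df.to_csv(processed_embedding_file_name, index=False)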
.env
-----
OPENAI_API_KEY : API key for the OpenAI platform
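A typical way the key gets picked up at runtime (a sketch assuming python-dotenv is used):

import os
import openai
from dotenv import load_dotenv

load_dotenv()  # reads .env into the process environment
openai.api_key = os.environ["OPENAI_API_KEY"]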
Dockerfile
----------
AWS_ACCESS_KEY_ID : AWS access key ID for the account your S3 bucket is in
AWS_SECRET_ACCESS_KEY : AWS secret access key for the account your S3 bucket is in
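These are baked into the image as environment variables, e.g. (placeholder values):

ENV AWS_ACCESS_KEY_ID=<your-access-key-id>
ENV AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

Note that embedding credentials in an image is convenient for a demo, but attaching an IAM execution role with S3 access to the Lambda is the usual production choice.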
index.html
----------
chatGptProcessingUrl : Lambda endpoint to hit when querying
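Once set, the demo page posts queries to this URL. You can also hit it directly; the URL format below assumes a Lambda function URL, and the payload shape is a placeholder (check app.py for the expected body):

$ curl -XPOST "https://<url-id>.lambda-url.<region>.on.aws/" -d '{"query": "..."}'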