Protecting sensitive-data in RAG applications on Amazon Bedrock. 2 Sc… #492

Open · wants to merge 1 commit into base: `main`
196 changes: 196 additions & 0 deletions security/securing-rag-apps/.gitignore
@@ -0,0 +1,196 @@
# Created by https://www.toptal.com/developers/gitignore/api/python,jupyternotebooks
# Edit at https://www.toptal.com/developers/gitignore?templates=python,jupyternotebooks

### JupyterNotebooks ###
# gitignore template for Jupyter Notebooks
# website: http://jupyter.org/

.ipynb_checkpoints
*/.ipynb_checkpoints/*
.virtual_documents
*/.virtual_documents/*

# IPython
profile_default/
ipython_config.py

# Remove previous ipynb_checkpoints
# git rm -r .ipynb_checkpoints/

### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook

# IPython

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

### Python Patch ###
# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
poetry.toml

# ruff
.ruff_cache/

# LSP config files
pyrightconfig.json

# End of https://www.toptal.com/developers/gitignore/api/python,jupyternotebooks
.DS_Store
data/*
# CDK Specific
cdk.out
**/response.json
drawio/.*.bkp
**/.streamlit/
53 changes: 53 additions & 0 deletions security/securing-rag-apps/README.md
@@ -0,0 +1,53 @@
# Protecting sensitive data in RAG-based applications with Amazon Bedrock

This blog post shows two architecture patterns for protecting sensitive data in RAG-based applications using Amazon Bedrock.

In the **first scenario (Scenario 1)**, we show how to redact or mask sensitive data before it is stored in a vector store or Amazon Bedrock Knowledge Base (the ingestion phase). This zero-trust approach reduces the risk of sensitive information being inadvertently disclosed to unauthorized parties.

The **second scenario (Scenario 2)** covers situations where sensitive data must be stored in the vector store, such as healthcare settings with distinct user roles like administrators (doctors) and non-administrators (nurses or support personnel). Here, we show how a role-based access control pattern enables selective access to sensitive information based on user roles and permissions during retrieval.
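To illustrate the retrieval-side idea, a minimal sketch follows; the metadata attribute `contains_pii`, its values, and the role check are assumptions for demonstration, not the exact implementation in `scenario_2/`.

```python
import boto3

# Sketch of role-based retrieval filtering against a Bedrock Knowledge Base.
# Assumes documents were ingested with a `contains_pii` metadata attribute;
# attribute names and the auth flow in scenario_2 may differ.
bedrock_agent = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

def retrieve_for_role(kb_id: str, query: str, is_admin: bool) -> dict:
    vector_config = {"numberOfResults": 5}
    if not is_admin:
        # Non-admin users only match chunks flagged as PII-free.
        vector_config["filter"] = {
            "equals": {"key": "contains_pii", "value": False}
        }
    return bedrock_agent.retrieve(
        knowledgeBaseId=kb_id,
        retrievalQuery={"text": query},
        retrievalConfiguration={"vectorSearchConfiguration": vector_config},
    )
```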

Both scenarios come with an [AWS Cloud Development Kit (CDK)](https://aws.amazon.com/cdk/) stack and an accompanying [Streamlit](https://streamlit.io/) app for testing.

## Pre-requisites

Python version >= 3.10.16

Create and activate a virtual environment:

```shell
python -m venv .venv
source .venv/bin/activate
```

Upgrade pip and install dependencies from `requirements.txt`:

```shell
pip install -U pip
pip install -r requirements.txt
```

### Amazon Bedrock Model Access

Ensure you have access to Anthropic Claude models in Amazon Bedrock. Refer to the [getting started](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html) guide for more info.

## Synthetic Data Generation Tool

For testing each scenario with sensitive data, we use the [`synthetic_data.py`](./synthetic_data.py) data generation script.\
The script generates synthetic healthcare and financial data for testing purposes.\
The generated data is completely fictional and does not contain any real Personally Identifiable Information (PII).

Run the [`synthetic_data.py`](./synthetic_data.py) script to generate sample data for the demo.

```shell
python synthetic_data.py --seed 123 generate -n 10
```

Data files will be available under a new `data/` directory.
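To give a flavor of what the script produces, here is a heavily trimmed, illustrative sketch; the field names and value choices below are assumptions, and the real [`synthetic_data.py`](./synthetic_data.py) remains the source of truth.

```python
from faker import Faker

fake = Faker()
Faker.seed(123)  # same idea as the script's --seed flag

def fake_patient_record() -> dict:
    """Generate one fictional patient record (no real PII)."""
    return {
        "name": fake.name(),
        "ssn": fake.ssn(),
        "address": fake.address().replace("\n", ", "),
        "phone": fake.phone_number(),
        "symptom": fake.random_element(
            ["Chronic migraines", "Obesity", "Shortness of breath"]
        ),
    }

if __name__ == "__main__":
    for _ in range(10):  # analogous to `generate -n 10`
        print(fake_patient_record())
```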

## Scenario 1 (Data identification and redaction before Ingestion to KnowledgeBase)

Refer to [Scenario 1 README.md](./scenario_1/README.md)

## Scenario 2 (Role-Based access to PII data during retrieval)

Refer to [Scenario 2 README.md](./scenario_2/README.md)
11 changes: 11 additions & 0 deletions security/securing-rag-apps/requirements.txt
@@ -0,0 +1,11 @@
aws-cdk-lib==2.177.0
constructs>=10.0.0,<11.0.0
cdklabs.generative-ai-cdk-constructs==0.1.290
streamlit==1.41.1
watchdog==6.0.0
jwt==1.3.1
loguru==0.7.3
boto3==1.36.6
faker==33.1.0
click==8.1.7
rich==13.9.4
2 changes: 2 additions & 0 deletions security/securing-rag-apps/scenario_1/.gitignore
@@ -0,0 +1,2 @@
*/cdk.out/
response.json
113 changes: 113 additions & 0 deletions security/securing-rag-apps/scenario_1/README.md
@@ -0,0 +1,113 @@
# Scenario 1 - Data identification and redaction before Ingestion to KnowledgeBase

![Scenario 1 - Ingestion Flow](../images/scenario1_ingestion_flow.png)

In Scenario 1, documents flow through a series of carefully orchestrated steps:

1. Initial Document Upload (Step 1):
   - Users upload source documents containing sensitive data to an S3 bucket's "_inputs/_" folder
   - This triggers an automated data identification and redaction pipeline
2. Comprehend PII Redaction Process (Step 2):
   - ComprehendLambda, triggered by an EventBridge rule every 5 minutes:
     - Scans for new files in the "_inputs/_" folder
     - Moves detected files to a "processing/" folder
     - Initiates an async _Comprehend PII redaction analysis job_ (see the sketch after this list)
     - Records the job ID and status in a DynamoDB JobTracking table
   - Comprehend automatically redacts sensitive elements such as:
     - Names, addresses, phone numbers
     - Social security numbers, driver's license IDs
     - Banking information and credit card details
   - Comprehend replaces identified PII entities with placeholder tokens (e.g., [NAME], [SSN])
   - Once complete, redacted files move to the "for_macie_scan/" folder
3. Secondary Verification with Amazon Macie - Sensitive Data Detection (Step 3):
   - MacieLambda monitors Comprehend job completion
   - Upon successful completion, it triggers a Macie one-time sensitive data detection job
   - Macie scans all files in the "for_macie_scan/" folder
   - Based on Macie findings:
     - Files with severity >= 3 (HIGH) move to a "quarantine/" folder for human review
     - Files with severity < 3 (LOW) transfer to a "safe" bucket
4. Amazon Bedrock Knowledge Base Integration (Step 4):
   - Files in the "safe" bucket trigger an Amazon Bedrock Knowledge Base data ingestion job
   - Documents are securely indexed in the vector store
   - Ready for use in RAG applications
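To make Step 2 concrete, here is a minimal sketch of how a Lambda might start the asynchronous Comprehend redaction job and record it for tracking. The bucket name, prefixes, IAM role ARN, and table name are placeholder assumptions; the deployed ComprehendLambda may differ in detail.

```python
import boto3

comprehend = boto3.client("comprehend")
dynamodb = boto3.resource("dynamodb")

# Placeholder names -- adjust to match the actual stack.
BUCKET = "my-ingestion-bucket"
DATA_ACCESS_ROLE_ARN = "arn:aws:iam::111122223333:role/ComprehendDataAccessRole"

def start_redaction_job() -> str:
    """Kick off an async Comprehend PII redaction job over processing/."""
    response = comprehend.start_pii_entities_detection_job(
        InputDataConfig={"S3Uri": f"s3://{BUCKET}/processing/"},
        OutputDataConfig={"S3Uri": f"s3://{BUCKET}/for_macie_scan/"},
        Mode="ONLY_REDACTION",
        RedactionConfig={
            "PiiEntityTypes": ["ALL"],
            # Replace each entity with its type token, e.g. [NAME], [SSN]
            "MaskMode": "REPLACE_WITH_PII_ENTITY_TYPE",
        },
        LanguageCode="en",
        DataAccessRoleArn=DATA_ACCESS_ROLE_ARN,
        JobName="pii-redaction-job",
    )
    # Record the job ID and status in the JobTracking table for later polling
    dynamodb.Table("JobTracking").put_item(
        Item={"JobId": response["JobId"], "Status": response["JobStatus"]}
    )
    return response["JobId"]
```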

## Augmented Retrieval Flow

![Augmented Retrieval Flow](../images/scenario1_augmented%20retrieval_flow.png)
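The retrieval side is a standard Knowledge Base retrieve-and-generate call against the redacted index. A minimal sketch follows; the knowledge base ID and model ARN are placeholders, and the Streamlit app layers authentication and UI on top of this call.

```python
import boto3

# Placeholder IDs -- substitute the values from your deployed stack.
KB_ID = "KBID123456"
MODEL_ARN = (
    "arn:aws:bedrock:us-west-2::foundation-model/"
    "anthropic.claude-3-sonnet-20240229-v1:0"
)

client = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

response = client.retrieve_and_generate(
    input={"text": "What medications were recommended for chronic migraines?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KB_ID,
            "modelArn": MODEL_ARN,
        },
    },
)
print(response["output"]["text"])  # grounded answer over redacted documents
```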

## Usage

### Step 1: Deploying the CDK stack

#### Prerequisites

- Ensure the Python virtual environment is created and `requirements.txt` is installed, as described in the [Pre-requisites](../README.md#pre-requisites) section.

- Ensure Amazon Macie is enabled. Refer to the [getting-started](https://docs.aws.amazon.com/macie/latest/user/getting-started.html) guide for more info.
- Ensure access to launch Amazon Comprehend analysis jobs for PII redaction. Refer to the [getting-started](https://docs.aws.amazon.com/comprehend/latest/dg/getting-started.html) guide.
- Install Docker Desktop, which is required to build the custom CDK constructs.
  - [Install Docker Desktop for Windows](https://docs.docker.com/desktop/setup/install/windows-install/)
  - [Install Docker Desktop for Mac](https://docs.docker.com/desktop/setup/install/mac-install/)
  - [Install Docker Desktop for Linux](https://docs.docker.com/desktop/setup/install/linux/)

Set the environment variables below, replacing `ACCOUNT_ID` with your AWS account ID.

```shell
cd scenario_1/cdk

export CDK_DEFAULT_ACCOUNT=ACCOUNT_ID && \
export CDK_DEFAULT_REGION=us-west-2 && \
export JSII_SILENCE_WARNING_UNTESTED_NODE_VERSION=1
```

At this point you can bootstrap the environment, synthesize the CloudFormation template, and deploy the stack.

>**NOTE:** Before running the command below, ensure Docker Desktop is running.

```shell
cdk bootstrap && cdk synth && cdk deploy
```

Wait for the deployment to complete.

### Step 2: Running the app

>**NOTE:** Ensure [`synthetic_data.py`](../synthetic_data.py) has been run before this step. Refer to the [Synthetic Data Generation Tool](../README.md#synthetic-data-generation-tool) section for more info.

Execute the `run_app.sh` script from the root of the `scenario_1/` directory.
The script automatically uploads test data to S3 and monitors both the Comprehend PII redaction and Macie sensitive-data detection jobs to completion.

```shell
cd ..
chmod +x run_app.sh
./run_app.sh
```
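Under the hood, monitoring a Comprehend job amounts to polling its status until it reaches a terminal state. Below is a hypothetical sketch of that loop; the job ID would come from the DynamoDB JobTracking table populated at submission time.

```python
import time

import boto3

comprehend = boto3.client("comprehend")

def wait_for_redaction_job(job_id: str, poll_seconds: int = 30) -> str:
    """Poll an async Comprehend PII redaction job until it finishes."""
    while True:
        job = comprehend.describe_pii_entities_detection_job(JobId=job_id)
        status = job["PiiEntitiesDetectionJobProperties"]["JobStatus"]
        if status in ("COMPLETED", "FAILED", "STOPPED"):
            return status
        time.sleep(poll_seconds)
```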

Wait for the script to complete; it should then automatically launch the Streamlit app at <http://localhost:8501/>.

- Log in as `[email protected]` using the password you reset earlier.
- From the sidebar, select a model from the drop-down.
- Optionally, set model params like `temperature` and `top_p` values.
- Ask questions based on your data files in the [data](../data/) folder.

Here are a few sample questions:

- What medications were recommended for _Chronic migraines_
- Typically what are recommended medications for _shortness of breath_
- List all patients with _Obesity_ as Symptom and the recommended medications
- What is the home address of _Nikhil Jayashankar_
- List all patients under _Institution Flores Group Medical Center_

>**NOTE:** The above questions are for reference only; your data files may or may not contain relevant information. Check your data files in the [data](../data/) folder.

## Cleanup (Scenario 1)

Delete the stack.

>**NOTE:** The command below deletes all deployed resources, including S3 buckets.

```shell
pwd  # confirm you are in scenario_1/cdk
cdk destroy
```
9 changes: 9 additions & 0 deletions security/securing-rag-apps/scenario_1/cdk/app.py
@@ -0,0 +1,9 @@
#!/usr/bin/env python3

import aws_cdk as cdk

from cdk_pii_scenario1.cdk_pii_scenario1_stack import PiiRedactionStack

app = cdk.App()
PiiRedactionStack(app, "CdkPiiScenario1Stack")
app.synth()