
docs: Small updates to the Ragas ML backend readme #587

Open · wants to merge 3 commits into base: agi-builders-workshop-rag
26 changes: 13 additions & 13 deletions label_studio_ml/examples/rag_quickstart/README.md
@@ -3,7 +3,7 @@
title: Question answering with RAG using Label Studio
type: guide
tier: all
-order: 5
+order: 5
Contributor comment:
nit: can remove the trailing space

hide_menu: true
hide_frontmatter_title: true
meta_title: RAG labeling with OpenAI using Label Studio
@@ -12,27 +12,26 @@ categories:
- Generative AI
- Large Language Model
- OpenAI
- Azure
- Ollama
- ChatGPT
- RAG
- LangChain
- Ragas
- Embeddings
-image: "/tutorials/llm-interactive.png"
+image: "/tutorials/ragas.png"
---
-->

# RAG Quickstart Labeling

-This example server connects Label Studio to [OpenAI](https://platform.openai.com/), to interact with chat and embedding models. It supports question answering and evaluation using RAG, given a list of questions as tasks, and a folder containing documentation (eg, a `/docs` path within a Github repository that has been cloned on your computer.)
+This example server connects Label Studio to [OpenAI](https://platform.openai.com/), to interact with chat and embedding models. It supports question answering and evaluation using RAG, given a list of questions as tasks, and a folder containing documentation (e.g. a `/docs` path within a Github repository that has been cloned on your computer.)
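The retrieval half of RAG can be sketched in plain Python. This is an illustration only, not the backend's actual implementation: it uses toy vectors in place of real embedding-model output, and ranks documents by cosine similarity to the question vector.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(question_vec, doc_vecs, docs, top_k=1):
    """Return the top_k documents closest to the question vector."""
    scored = sorted(
        zip(docs, doc_vecs),
        key=lambda pair: cosine_similarity(question_vec, pair[1]),
        reverse=True,
    )
    return [doc for doc, _ in scored[:top_k]]

# Toy 2-d vectors standing in for real embedding output
docs = ["How to mount documentation", "How to label tasks"]
doc_vecs = [[1.0, 0.0], [0.0, 1.0]]
question_vec = [0.9, 0.1]  # closest to the first document
print(retrieve(question_vec, doc_vecs, docs))  # → ['How to mount documentation']
```

In the real backend the vectors would come from an embedding model, and the retrieved chunks would be passed to the chat model as context for answering and for Ragas evaluation.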

## Starting the ML Backend

-1. Make your reference documentation available to the backend
+1. Make your reference documentation available to the backend.

-Create a `docker-compose.override.yml` file alongside `docker-compose.yml`, and use it to mount a folder containing your documentation into the filesystem of the ML Backend's image. This example will mount the folder at `/host/path/to/your/documentation` on your computer, to the path /data/documentation inside the ML Backend docker image. The `DOCUMENTATION_PATH` and `DOCUMENTATION_GLOB` settings given below will match all `.md` files within `/data/documentation` (or its subfolders).
+Create a `docker-compose.override.yml` file alongside `docker-compose.yml`, and use it to mount a folder containing your documentation into the filesystem of the ML backend's image. This example will mount the folder at `/host/path/to/your/documentation` on your computer, to the path /data/documentation inside the ML backend Docker image. The `DOCUMENTATION_PATH` and `DOCUMENTATION_GLOB` settings given below will match all `.md` files within `/data/documentation` (or its subfolders).

```
services:
rag_quickstart:
volumes:
- /host/path/to/your/documentation:/data/documentation
```

@@ -58,8 +57,7 @@ $ curl http://localhost:9090/health
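The diff view truncates the rest of the override file. A minimal sketch of a complete `docker-compose.override.yml` consistent with the text above might look like the following; the mount path comes from the README, while the `DOCUMENTATION_PATH` and `DOCUMENTATION_GLOB` values shown are assumed examples, not confirmed defaults.

```yaml
services:
  rag_quickstart:
    volumes:
      # Host documentation folder mounted read-only into the backend image
      - /host/path/to/your/documentation:/data/documentation:ro
    environment:
      # Assumed example values: point the backend at the mounted docs
      - DOCUMENTATION_PATH=/data/documentation
      - DOCUMENTATION_GLOB=**/*.md
```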
Ensure the **Interactive preannotations** toggle is enabled and click **Validate and Save**.
5. Use the label config below. The config and backend can be customized to fit your needs.
6. Open a task and ensure the **Auto-Annotation** toggle is enabled (it is located at the bottom of the labeling interface).
-7. Enter a prompt in the prompt input field and press `Shift+Enter`. The LLM response will be generated and displayed in the response field.
+7. The text fields should be auto-completed by the LLM. However, you can provide additional instructions in the empty text area field. To submit, press `Shift+Enter`.
8. If you want to apply LLM auto-annotation to multiple tasks at once, go to the [Data Manager](https://labelstud.io/guide/manage_data), select a group of tasks and then select **Actions > Retrieve Predictions** (or **Batch Predictions** in Label Studio Enterprise).

## Label Config
@@ -116,15 +114,15 @@ $ curl http://localhost:9090/health
/>
<View className="ragas" >
<View style="display: flex;">
-<Header style="padding-right: 1em;" value="RAGAS evaluation (averaged, 0 to 100):"/><Number name="float_eval" toName="context" defaultValue="0" />
+<Header style="padding-right: 1em;" value="Ragas evaluation (averaged, 0 to 100):"/><Number name="float_eval" toName="context" defaultValue="0" />
</View>
<TextArea name="ragas"
toName="context"
rows="2"
maxSubmissions="1"
showSubmitButton="false"
smart="false"
-placeholder="RAGAS evaluation will appear here..."
+placeholder="Ragas evaluation will appear here..."
/>
</View>
<View className="evaluation" >
@@ -154,6 +152,8 @@ $ curl http://localhost:9090/health
</View>
```

For more information on this labeling config, see the [Evaluate RAG with Ragas](https://labelstud.io/templates/llm_ragas) template documentation.

**Example data input:**

Tip: when generating questions for your project, it may be helpful to pass this snippet to ChatGPT etc to give it an example of Label Studio's tasks format to work from.
@@ -173,4 +173,4 @@
}
}
]
```
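The example input above is truncated by the diff view. A minimal tasks file in Label Studio's JSON import format is sketched below; the `"question"` field name is an assumed example chosen to match this project's label config, not a value confirmed by the diff.

```json
[
  {
    "data": {
      "question": "How do I connect an ML backend to Label Studio?"
    }
  },
  {
    "data": {
      "question": "How do I enable interactive preannotations?"
    }
  }
]
```

Each array element becomes one task, and the keys under `"data"` are the variables the label config can reference.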