diff --git a/Makefile b/Makefile index 0e9b7d4fe7..06544b5c4c 100644 --- a/Makefile +++ b/Makefile @@ -39,7 +39,7 @@ run-ci: format lint type test ## Running all CI checks build-docsite: ## Use GPT4 to rewrite the documentation @echo "convert ipynb notebooks to md files" $(Q)python $(GIT_ROOT)/docs/ipynb_to_md.py - @(Q)mkdocs build + $(Q)mkdocs build serve-docsite: ## Build and serve documentation $(Q)mkdocs serve --dirtyreload diff --git a/docs/howtos/integrations/_langchain.md b/docs/howtos/integrations/_langchain.md deleted file mode 100644 index fa0bd7aa3e..0000000000 --- a/docs/howtos/integrations/_langchain.md +++ /dev/null @@ -1,297 +0,0 @@ -# Langchain -## Evaluating Langchain QA Chains - -LangChain is a framework for developing applications powered by language models. It can also be used to create RAG systems (or QA systems as they are reffered to in langchain). If you want to know more about creating RAG systems with langchain you can check the [docs](https://python.langchain.com/docs/use_cases/question_answering/). - -With this integration you can easily evaluate your QA chains with the metrics offered in ragas - - -```python -#!pip install ragas langchain_openai python-dotenv -``` - - -```python -# attach to the existing event loop when using jupyter notebooks -import nest_asyncio -import os -import openai -from dotenv import load_dotenv - -# Load environment variables from .env file -load_dotenv() -# IMPORTANT: Remember to create a .env variable containing: OPENAI_API_KEY=sk-xyz where xyz is your key - -# Access the API key from the environment variable -api_key = os.environ.get("OPENAI_API_KEY") - -# Initialize the OpenAI API client -openai.api_key = api_key - -nest_asyncio.apply() -``` - -First lets load the dataset. We are going to build a generic QA system over the [NYC wikipedia page](https://en.wikipedia.org/wiki/New_York_City). Load the dataset and create the `VectorstoreIndex` and the `RetrievalQA` from it. - - -```python -from langchain_community.document_loaders import TextLoader -from langchain.indexes import VectorstoreIndexCreator -from langchain.chains import RetrievalQA -from langchain_openai import ChatOpenAI - -loader = TextLoader("./nyc_wikipedia/nyc_text.txt") -index = VectorstoreIndexCreator().from_loaders([loader]) - - -llm = ChatOpenAI(temperature=0) -qa_chain = RetrievalQA.from_chain_type( - llm, - retriever=index.vectorstore.as_retriever(), - return_source_documents=True, -) -``` - - -```python -# testing it out - -question = "How did New York City get its name?" -result = qa_chain({"query": question}) -result["result"] -``` - -Now in order to evaluate the qa system we generated a few relevant questions. We've generated a few question for you but feel free to add any you want. - - -```python -eval_questions = [ - "What is the population of New York City as of 2020?", - "Which borough of New York City has the highest population?", - "What is the economic significance of New York City?", - "How did New York City get its name?", - "What is the significance of the Statue of Liberty in New York City?", -] - -eval_answers = [ - "8,804,190", - "Brooklyn", - "New York City's economic significance is vast, as it serves as the global financial capital, housing Wall Street and major financial institutions. Its diverse economy spans technology, media, healthcare, education, and more, making it resilient to economic fluctuations. NYC is a hub for international business, attracting global companies, and boasts a large, skilled labor force. 
Its real estate market, tourism, cultural industries, and educational institutions further fuel its economic prowess. The city's transportation network and global influence amplify its impact on the world stage, solidifying its status as a vital economic player and cultural epicenter.", - "New York City got its name when it came under British control in 1664. King Charles II of England granted the lands to his brother, the Duke of York, who named the city New York in his own honor.", - "The Statue of Liberty in New York City holds great significance as a symbol of the United States and its ideals of liberty and peace. It greeted millions of immigrants who arrived in the U.S. by ship in the late 19th and early 20th centuries, representing hope and freedom for those seeking a better life. It has since become an iconic landmark and a global symbol of cultural diversity and freedom.", -] - -examples = [ - {"query": q, "ground_truth": [eval_answers[i]]} - for i, q in enumerate(eval_questions) -] -``` - -## Introducing `RagasEvaluatorChain` - -`RagasEvaluatorChain` creates a wrapper around the metrics ragas provides (documented [here](https://github.com/explodinggradients/ragas/blob/main/docs/concepts/metrics/index.md)), making it easier to run these evaluation with langchain and langsmith. - -The evaluator chain has the following APIs - -- `__call__()`: call the `RagasEvaluatorChain` directly on the result of a QA chain. -- `evaluate()`: evaluate on a list of examples (with the input queries) and predictions (outputs from the QA chain). -- `evaluate_run()`: method implemented that is called by langsmith evaluators to evaluate langsmith datasets. - -lets see each of them in action to learn more. - - -```python -result = qa_chain({"query": eval_questions[1]}) -result["result"] -``` - - -```python -result = qa_chain(examples[4]) -result["result"] -``` - - -```python -from ragas.langchain.evalchain import RagasEvaluatorChain -from ragas.metrics import ( - faithfulness, - answer_relevancy, - context_precision, - context_recall, -) - -# create evaluation chains -faithfulness_chain = RagasEvaluatorChain(metric=faithfulness) -answer_rel_chain = RagasEvaluatorChain(metric=answer_relevancy) -context_rel_chain = RagasEvaluatorChain(metric=context_precision) -context_recall_chain = RagasEvaluatorChain(metric=context_recall) -``` - -1. `__call__()` - -Directly run the evaluation chain with the results from the QA chain. Do note that metrics like context_precision and faithfulness require the `source_documents` to be present. - - -```python -# Recheck the result that we are going to validate. -result -``` - -**Faithfulness** - - -```python -eval_result = faithfulness_chain(result) -eval_result["faithfulness_score"] -``` - -High faithfulness_score means that there are exact consistency between the source documents and the answer. - -You can check lower faithfulness scores by changing the result (answer from LLM) or source_documents to something else. - - -```python -fake_result = result.copy() -fake_result["result"] = "we are the champions" -eval_result = faithfulness_chain(fake_result) -eval_result["faithfulness_score"] -``` - -**Context Recall** - - -```python -eval_result = context_recall_chain(result) -eval_result["context_recall_score"] -``` - -High context_recall_score means that the ground truth is present in the source documents. - -You can check lower context recall scores by changing the source_documents to something else. 
- - -```python -from langchain.schema import Document - -fake_result = result.copy() -fake_result["source_documents"] = [Document(page_content="I love christmas")] -eval_result = context_recall_chain(fake_result) -eval_result["context_recall_score"] -``` - -2. `evaluate()` - -Evaluate a list of inputs/queries and the outputs/predictions from the QA chain. - - -```python -# run the queries as a batch for efficiency -predictions = qa_chain.batch(examples) - -# evaluate -print("evaluating...") -r = faithfulness_chain.evaluate(examples, predictions) -r -``` - - -```python -# evaluate context recall -print("evaluating...") -r = context_recall_chain.evaluate(examples, predictions) -r -``` - -## Evaluate with langsmith - -[Langsmith](https://docs.smith.langchain.com/) is a platform that helps to debug, test, evaluate and monitor chains and agents built on any LLM framework. It also seamlessly integrates with LangChain. - -Langsmith also has a tools to build a testing dataset and run evaluations against them and with `RagasEvaluatorChain` you can use the ragas metrics for running langsmith evaluations as well. To know more about langsmith evaluations checkout the [quickstart](https://docs.smith.langchain.com/evaluation/quickstart). - - -Lets start of creating the dataset with the NYC questions listed in `eval_questions`. Create a new langsmith dataset and upload the questions. - - -```python -# dataset creation - -from langsmith import Client -from langsmith.utils import LangSmithError - -client = Client() -dataset_name = "NYC test" - -try: - # check if dataset exists - dataset = client.read_dataset(dataset_name=dataset_name) - print("using existing dataset: ", dataset.name) -except LangSmithError: - # if not create a new one with the generated query examples - dataset = client.create_dataset( - dataset_name=dataset_name, description="NYC test dataset" - ) - for e in examples: - client.create_example( - inputs={"query": e["query"]}, - outputs={"ground_truth": e["ground_truth"]}, - dataset_id=dataset.id, - ) - - print("Created a new dataset: ", dataset.name) -``` - -![](../../_static/langsmith-dataset.png) - -As you can see the questions have been uploaded. Now you can run your QA chain against this test dataset and compare the results in the langchain platform. - -Before you call `run_on_dataset` you need a factory function which creates a new instance of the QA chain you want to test. This is so that the internal state is not reused when running against each example. - - -```python -# factory function that return a new qa chain -def create_qa_chain(return_context=True): - qa_chain = RetrievalQA.from_chain_type( - llm, - retriever=index.vectorstore.as_retriever(), - return_source_documents=return_context, - ) - return qa_chain -``` - -Now lets run the evaluation - - -```python -from langchain.smith import RunEvalConfig, run_on_dataset - -evaluation_config = RunEvalConfig( - custom_evaluators=[ - faithfulness_chain, - answer_rel_chain, - context_rel_chain, - context_recall_chain, - ], - prediction_key="result", -) - -result = run_on_dataset( - client, - dataset_name, - create_qa_chain, - evaluation=evaluation_config, - input_mapper=lambda x: x, -) -``` - -You can follow the link to open the result for the run in langsmith. Check out the scores for each example too - -![](../../_static/langsmith-evaluation.png) - -Now if you want to dive more into the reasons for the scores and how to improve them, click on any example and open the feedback tab. This will show you each scores. 
- -![](../../_static/langsmith-feedback.png) - -You can also see the curresponding `RagasEvaluatorChain` trace too to figure out why ragas scored the way it did. - -![](../../_static/langsmith-ragas-chain-trace.png) diff --git a/docs/howtos/integrations/index.md b/docs/howtos/integrations/index.md index f3fba6ee39..b0c0db3484 100644 --- a/docs/howtos/integrations/index.md +++ b/docs/howtos/integrations/index.md @@ -8,7 +8,7 @@ happy to look into it 🙂 ## Frameworks -- [Langchain](./_langchain.md) - Langchain is a framework for building LLM applications, more information can be found [here](https://www.langchain.com/). +- [Langchain](./langchain.md) - Langchain is a framework for building LLM applications, more information can be found [here](https://www.langchain.com/). - [LlamaIndex](./_llamaindex.md) - LlamaIndex is a framework for building RAG applications, more information can be found [here](https://www.llamaindex.ai/). ## Tracing Tools diff --git a/docs/howtos/integrations/langchain.md b/docs/howtos/integrations/langchain.md new file mode 100644 index 0000000000..353e4d3883 --- /dev/null +++ b/docs/howtos/integrations/langchain.md @@ -0,0 +1,173 @@ +# LangChain Integration + +This tutorial demonstrates how to evaluate a RAG-based Q&A application built with LangChain using Ragas. Additionally, we will explore how the Ragas App can help analyze and enhance the application's performance. + +### Building a simple Q&A application + +To build a question-answering system, we start by creating a small dataset and indexing it using its embeddings in a vector database. + + +```python +import os +from dotenv import load_dotenv +from langchain_core.documents import Document + +load_dotenv() + +content_list = [ + "Andrew Ng is the CEO of Landing AI and is known for his pioneering work in deep learning. He is also widely recognized for democratizing AI education through platforms like Coursera.", + "Sam Altman is the CEO of OpenAI and has played a key role in advancing AI research and development. He is a strong advocate for creating safe and beneficial AI technologies.", + "Demis Hassabis is the CEO of DeepMind and is celebrated for his innovative approach to artificial intelligence. He gained prominence for developing systems that can master complex games like AlphaGo.", + "Sundar Pichai is the CEO of Google and Alphabet Inc., and he is praised for leading innovation across Google's vast product ecosystem. His leadership has significantly enhanced user experiences on a global scale.", + "Arvind Krishna is the CEO of IBM and is recognized for transforming the company towards cloud computing and AI solutions. He focuses on providing cutting-edge technologies to address modern business challenges.", +] + +langchain_documents = [] + +for content in content_list: + langchain_documents.append( + Document( + page_content=content, + ) + ) +``` + + +```python +from langchain_openai.embeddings import OpenAIEmbeddings +from langchain_core.vectorstores import InMemoryVectorStore + +embeddings = OpenAIEmbeddings(model="text-embedding-3-small") +vector_store = InMemoryVectorStore(embeddings) + +_ = vector_store.add_documents(langchain_documents) +``` + +We will now build a RAG-based system that integrates the retriever, LLM, and prompt into a Retrieval QA Chain. The retriever fetches relevant documents from a knowledge base. 
The LLM then generates a response from the retrieved documents, guided by a prompt that supplies the context and steers the model toward a relevant, coherent answer.
+
+In LangChain, we can create a retriever from a vector store by using its `.as_retriever` method. For more details, refer to the [LangChain documentation on vector store retrievers](https://python.langchain.com/docs/how_to/vectorstore_retriever/).
+
+
+```python
+retriever = vector_store.as_retriever(search_kwargs={"k": 1})
+```
+
+
+```python
+from langchain_openai import ChatOpenAI
+
+llm = ChatOpenAI(model="gpt-4o-mini")
+```
+
+We will define a chain that takes the user query and the retrieved documents and passes them to the model within a structured prompt. The model's output is then parsed into the final string response.
+
+
+```python
+from langchain_core.prompts import ChatPromptTemplate
+from langchain_core.output_parsers import StrOutputParser
+
+
+template = """Answer the question based only on the following context:
+{context}
+
+Question: {query}
+"""
+prompt = ChatPromptTemplate.from_template(template)
+
+qa_chain = prompt | llm | StrOutputParser()
+```
+
+
+```python
+def format_docs(relevant_docs):
+    return "\n".join(doc.page_content for doc in relevant_docs)
+
+
+query = "Who is the CEO of OpenAI?"
+
+relevant_docs = retriever.invoke(query)
+qa_chain.invoke({"context": format_docs(relevant_docs), "query": query})
+```
+Output:
+```
+'The CEO of OpenAI is Sam Altman.'
+```
+
+
+### Evaluate
+
+
+```python
+sample_queries = [
+    "Which CEO is widely recognized for democratizing AI education through platforms like Coursera?",
+    "Who is Sam Altman?",
+    "Who is Demis Hassabis and how did he gain prominence?",
+    "Who is the CEO of Google and Alphabet Inc., praised for leading innovation across Google's product ecosystem?",
+    "How did Arvind Krishna transform IBM?",
+]
+
+expected_responses = [
+    "Andrew Ng is the CEO of Landing AI and is widely recognized for democratizing AI education through platforms like Coursera.",
+    "Sam Altman is the CEO of OpenAI and has played a key role in advancing AI research and development. He strongly advocates for creating safe and beneficial AI technologies.",
+    "Demis Hassabis is the CEO of DeepMind and is celebrated for his innovative approach to artificial intelligence. He gained prominence for developing systems like AlphaGo that can master complex games.",
+    "Sundar Pichai is the CEO of Google and Alphabet Inc., praised for leading innovation across Google's vast product ecosystem. His leadership has significantly enhanced user experiences globally.",
+    "Arvind Krishna is the CEO of IBM and has transformed the company towards cloud computing and AI solutions. He focuses on delivering cutting-edge technologies to address modern business challenges.",
+]
+```
+
+To evaluate the Q&A system, we need to structure the queries, expected responses, and other metric-specific requirements into an [EvaluationDataset][ragas.dataset_schema.EvaluationDataset].
+
+
+```python
+from ragas import EvaluationDataset
+
+
+dataset = []
+
+for query, reference in zip(sample_queries, expected_responses):
+    relevant_docs = retriever.invoke(query)
+    response = qa_chain.invoke({"context": format_docs(relevant_docs), "query": query})
+    dataset.append(
+        {
+            "user_input": query,
+            "retrieved_contexts": [rdoc.page_content for rdoc in relevant_docs],
+            "response": response,
+            "reference": reference,
+        }
+    )
+
+evaluation_dataset = EvaluationDataset.from_list(dataset)
+```
+
+To evaluate our Q&A application, we will use the following metrics.
+
+
+- `LLMContextRecall`: Evaluates how well the retrieved contexts align with claims in the reference answer, estimating recall without manual reference context annotations.
+- `Faithfulness`: Assesses whether all claims in the generated answer can be inferred directly from the provided context.
+- `FactualCorrectness`: Checks the factual accuracy of the generated response by comparing it with a reference, using claim-based evaluation and natural language inference.
+
+For more details on these metrics and how they apply to evaluating RAG systems, visit the [Ragas Metrics Documentation](/docs/concepts/metrics/available_metrics/).
+
+
+```python
+from ragas import evaluate
+from ragas.llms import LangchainLLMWrapper
+from ragas.metrics import LLMContextRecall, Faithfulness, FactualCorrectness
+
+evaluator_llm = LangchainLLMWrapper(llm)
+
+result = evaluate(
+    dataset=evaluation_dataset,
+    metrics=[LLMContextRecall(), Faithfulness(), FactualCorrectness()],
+    llm=evaluator_llm,
+)
+
+result
+```
+
+Output:
+```
+{'context_recall': 1.0000, 'faithfulness': 0.9000, 'factual_correctness': 0.9260}
+```
+
+Check out [app.ragas.io](http://app.ragas.io) for a more detailed analysis, including interactive visualizations and metrics. You'll need to create an account and generate a [Ragas API key](https://app.ragas.io/dashboard/settings/app-tokens) to upload and explore your results.
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
index 78aaa2ef2a..849e8ea0be 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -100,6 +100,7 @@ nav:
       - howtos/integrations/index.md
       - LlamaIndex: howtos/integrations/_llamaindex.md
      - Arize: howtos/integrations/_arize.md
+      - LangChain: howtos/integrations/langchain.md
       - LangGraph: howtos/integrations/_langgraph_agent_evaluation.md
  - Migrations:
    - From v0.1 to v0.2: howtos/migrations/migrate_from_v01_to_v02.md
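The new tutorial ends by pointing readers to app.ragas.io for deeper analysis. As a minimal sketch of that last step — assuming a recent Ragas release in which the `EvaluationResult` returned by `evaluate()` exposes an `upload()` helper, and assuming your API key is available in the `RAGAS_APP_TOKEN` environment variable — pushing the `result` object from the evaluation above to the app might look like this:

```python
import os

# Assumption: RAGAS_APP_TOKEN holds the API key generated at
# https://app.ragas.io/dashboard/settings/app-tokens (e.g. loaded from a .env file).
assert os.environ.get("RAGAS_APP_TOKEN"), "set RAGAS_APP_TOKEN before uploading"

# In recent Ragas versions, the EvaluationResult from evaluate() can be
# pushed to app.ragas.io; the call surfaces a link to the hosted run,
# where per-sample scores and traces can be inspected interactively.
result.upload()
```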