From 546c765e05c2fabb10cf6a98126d699624bec2d7 Mon Sep 17 00:00:00 2001
From: ChengZi
Date: Fri, 27 Sep 2024 15:55:08 +0800
Subject: [PATCH] apify black format

Signed-off-by: ChengZi
---
 .../integration/apify_milvus_rag.ipynb        | 819 +++++++++---------
 1 file changed, 409 insertions(+), 410 deletions(-)

diff --git a/bootcamp/tutorials/integration/apify_milvus_rag.ipynb b/bootcamp/tutorials/integration/apify_milvus_rag.ipynb
index 270bea863..1d33acab4 100644
--- a/bootcamp/tutorials/integration/apify_milvus_rag.ipynb
+++ b/bootcamp/tutorials/integration/apify_milvus_rag.ipynb
@@ -1,431 +1,430 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# Retrieval-Augmented Generation: Crawling Websites with Apify and Saving Data to Milvus for Question Answering\n",
        "\n",
        "This tutorial explains how to crawl websites using Apify's Website Content Crawler and save the data into the Milvus/Zilliz vector database to be used later for question answering.\n",
        "\n",
        "[Apify](https://apify.com/) is a web scraping and data extraction platform that offers an app marketplace with over two thousand ready-made cloud tools, known as Actors. These tools are ideal for use cases such as extracting structured data from e-commerce websites, social media, search engines, online maps, and more.\n",
        "\n",
        "For example, the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor can deeply crawl websites, clean their HTML by removing a cookies modal, footer, or navigation, and then transform the HTML into Markdown.\n",
        "\n",
        "The Apify integration for Milvus/Zilliz makes it easy to upload data from the web to the vector database.\n",
        "\n",
        "# Before you begin\n",
        "\n",
        "Before running this notebook, make sure you have the following:\n",
        "\n",
        "- An Apify account and [APIFY_API_TOKEN](https://docs.apify.com/platform/integrations/api).\n",
        "\n",
        "- An OpenAI account and [OPENAI_API_KEY](https://platform.openai.com/docs/quickstart).\n",
        "\n",
        "- A [Zilliz Cloud account](https://cloud.zilliz.com) (a fully managed cloud service for Milvus).\n",
        "- The Zilliz database URI and token.\n",
        "\n",
        "\n",
        "## Install dependencies"
      ],
      "metadata": {
        "id": "t1BeKtSo7KzI"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install --upgrade --quiet apify==1.7.2 langchain-core==0.3.5 langchain-milvus==0.1.5 langchain-openai==0.2.0\n"
      ],
      "metadata": {
        "id": "r5AJeMOE1Cou"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Set up Apify and OpenAI API keys\n",
        "\n"
      ],
      "metadata": {
        "id": "h6MmIG9K1HkK"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "from getpass import getpass\n",
        "\n",
        "os.environ[\"APIFY_API_TOKEN\"] = getpass(\"Enter YOUR APIFY_API_TOKEN\")\n",
        "os.environ[\"OPENAI_API_KEY\"] = getpass(\"Enter YOUR OPENAI_API_KEY\")\n"
      ],
      "metadata": {
        "id": "yiUTwYzP36Yr",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "8f233e7b-dfd0-4c5b-ce19-0d9bffc67d49"
      },
      "execution_count": null,
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Enter YOUR APIFY_API_TOKEN··········\n",
            "Enter YOUR OPENAI_API_KEY··········\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Set up Milvus/Zilliz URI, token and collection name\n",
        "\n",
        "You need the URI and token of your Milvus/Zilliz instance to set up the client.\n",
        "- If you have a self-deployed Milvus server on [Docker or Kubernetes](https://milvus.io/docs/quickstart.md), use the server address and port as your URI, e.g. `http://localhost:19530`. If you enable the authentication feature on Milvus, use \"<your_username>:<your_password>\" as the token; otherwise, leave the token as an empty string.\n",
        "- If you use [Zilliz Cloud](https://zilliz.com/cloud), the fully managed cloud service for Milvus, adjust the `uri` and `token`, which correspond to the [Public Endpoint and API key](https://docs.zilliz.com/docs/on-zilliz-cloud-console#cluster-details) in Zilliz Cloud.\n",
        "\n",
        "Note that the collection does not need to exist beforehand. It will be automatically created when data is uploaded to the database."
      ],
      "metadata": {
        "id": "VN2vn2sISFls"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "os.environ[\"MILVUS_URI\"] = getpass(\"Enter YOUR MILVUS_URI\")\n",
        "os.environ[\"MILVUS_TOKEN\"] = getpass(\"Enter YOUR MILVUS_TOKEN\")\n",
        "\n",
        "MILVUS_COLLECTION_NAME=\"apify\""
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "_QW-DOWBSA7W",
        "outputId": "e8bd3a5b-2409-4394-bace-2b15787a19f7"
      },
      "execution_count": null,
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Enter YOUR MILVUS_URI··········\n",
            "Enter YOUR MILVUS_TOKEN··········\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Using the Website Content Crawler to scrape text content from Milvus.io\n",
        "\n",
        "Next, we'll use the Website Content Crawler with the Apify Python SDK. We'll start by defining the actor_id and run_input, then specify the information that will be saved to the vector database.\n",
        "\n",
        "The `actor_id=\"apify/website-content-crawler\"` is the identifier for the Website Content Crawler. The crawler's behavior can be fully controlled via the run_input parameters (see the [input page](https://apify.com/apify/website-content-crawler/input-schema) for more details). In this example, we’ll be crawling the Milvus documentation, which doesn’t require JavaScript rendering. Therefore, we set `crawlerType=cheerio`, define `startUrls`, and limit the number of crawled pages by setting `maxCrawlPages=10`."
      ],
      "metadata": {
        "id": "HQzAujMc505k"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from apify_client import ApifyClient\n",
        "\n",
        "client = ApifyClient(os.getenv(\"APIFY_API_TOKEN\"))\n",
        "\n",
        "actor_id = \"apify/website-content-crawler\"\n",
        "run_input = {\n",
        "    \"crawlerType\": \"cheerio\",\n",
        "    \"maxCrawlPages\": 10,\n",
        "    \"startUrls\": [\n",
        "        {\"url\": \"https://milvus.io/\"},\n",
        "        {\"url\": \"https://zilliz.com/\"}\n",
        "    ]\n",
        "}\n",
        "\n",
        "actor_call = client.actor(actor_id).call(run_input=run_input)"
      ],
      "metadata": {
        "id": "_AYgcfBx681h"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The Website Content Crawler will thoroughly crawl the site until it reaches the predefined limit set by `maxCrawlPages`. The scraped data will be stored in a `Dataset` on the Apify platform. 
To access and analyze this data, you can use the `defaultDatasetId`" - ], - "metadata": { - "id": "yIODy29t-_JY" - } - }, - { - "cell_type": "code", - "source": [ - "dataset_id = actor_call[\"defaultDatasetId\"]\n", - "dataset_id" - ], - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 35 - }, - "id": "ZXmYlBWVUO44", - "outputId": "b5a0df26-bc3d-44a0-909b-ccbae3cbf50e" - }, - "execution_count": null, - "outputs": [ - { - "output_type": "execute_result", - "data": { - "text/plain": [ - "'P9dLFfeJAljlePWnz'" - ], - "application/vnd.google.colaboratory.intrinsic+json": { - "type": "string" - } - }, - "metadata": {}, - "execution_count": 15 - } - ] - }, - { - "cell_type": "markdown", - "source": [ - "The following code fetches the scraped data from the Apify `Dataset` and displays the first scraped website" - ], - "metadata": { - "id": "tP-ctO9bVmt3" - } - }, - { - "cell_type": "code", - "source": [ - "item = client.dataset(dataset_id).list_items(limit=1).items\n", - "item[0].get(\"text\")" - ], - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 109 - }, - "id": "3RikxWJwUVnr", - "outputId": "08d31136-f721-43bc-8fc5-75d51a333e04" - }, - "execution_count": null, - "outputs": [ - { - "output_type": "execute_result", - "data": { - "text/plain": [ - "'The High-Performance Vector Database Built for Scale\\nStart running Milvus in seconds\\nfrom pymilvus import MilvusClient client = MilvusClient(\"milvus_demo.db\") client.create_collection( collection_name=\"demo_collection\", dimension=5 )\\nDeployment Options to Match Your Unique Journey\\nMilvus Lite\\nLightweight, easy to start\\nVectorDB-as-a-library runs in notebooks/ laptops with a pip install\\nBest for learning and prototyping\\nMilvus Standalone\\nRobust, single-machine deployment\\nComplete vector database for production or testing\\nIdeal for datasets with up to millions of vectors\\nMilvus Distributed\\nScalable, enterprise-grade solution\\nHighly reliable and distributed vector database with comprehensive toolkit\\nScale horizontally to handle billions of vectors\\nZilliz Cloud\\nFully managed with minimal operations\\nAvailable in both serverless and dedicated cluster\\nSaaS and BYOC options for different security and compliance requirements\\nTry Free\\nLearn more about different Milvus deployment models\\nLoved by GenAI developers\\nBased on our research, Milvus was selected as the vector database of choice (over Chroma and Pinecone). Milvus is an open-source vector database designed specifically for similarity search on massive datasets of high-dimensional vectors.\\nWith its focus on efficient vector similarity search, Milvus empowers you to build robust and scalable image retrieval systems. Whether you’re managing a personal photo library or developing a commercial image search application, Milvus offers a powerful foundation for unlocking the hidden potential within your image collections.\\nBhargav Mankad\\nSenior Solution Architect\\nMilvus is a powerful vector database tailored for processing and searching extensive vector data. 
It stands out for its high performance and scalability, rendering it perfect for machine learning, deep learning, similarity search tasks, and recommendation systems.\\nIgor Gorbenko\\nBig Data Architect\\nStart building your GenAI app now\\nGuided with notebooks developed by us and our community\\nRAG\\nTry Now\\nImage Search\\nTry Now\\nMultimodal Search\\nTry Now\\nUnstructured Data Meetups\\nJoin a Community of Passionate Developers and Engineers Dedicated to Gen AI.\\nRSVP now\\nWhy Developers Prefer Milvus for Vector Databases\\nScale as needed\\nElastic scaling to tens of billions of vectors with distributed architecture.\\nBlazing fast\\nRetrieve data quickly and accurately with Global Index, regardless of scale.\\nReusable Code\\nWrite once, and deploy with one line of code into the production environment.\\nFeature-rich\\nMetadata filtering, hybrid search, multi-vector and more.\\nWant to learn more about Milvus? View our documentation\\nJoin the community of developers building GenAI apps with Milvus, now with over 25 million downloads\\nGet Milvus Updates\\nSubscribe to get updates on the latest Milvus releases, tutorials and training from Zilliz, the creator and key maintainer of Milvus.'"
          ],
          "application/vnd.google.colaboratory.intrinsic+json": {
            "type": "string"
          }
        },
        "metadata": {},
        "execution_count": 16
      }
    ]
  },
    {
      "cell_type": "markdown",
      "source": [
        "To upload data into the Milvus database, we use the [Apify Milvus integration](https://apify.com/apify/milvus-integration). First, we need to set up the parameters for the Milvus database. Next, we select the fields (`datasetFields`) that we want to store in the database. In the example below, we are saving the `text` field and `metadata.title`."
      ],
      "metadata": {
        "id": "HFGbAIc5Vu1y"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "milvus_integration_inputs = {\n",
        "    \"milvusUri\": os.getenv(\"MILVUS_URI\"),\n",
        "    \"milvusToken\": os.getenv(\"MILVUS_TOKEN\"),\n",
        "    \"milvusCollectionName\": MILVUS_COLLECTION_NAME,\n",
        "    \"datasetFields\": [\"text\", \"metadata.title\"],\n",
        "    \"datasetId\": actor_call[\"defaultDatasetId\"],\n",
        "    \"performChunking\": True,\n",
        "    \"embeddingsApiKey\": os.getenv(\"OPENAI_API_KEY\"),\n",
        "    \"embeddingsProvider\": \"OpenAI\",\n",
        "}"
      ],
      "metadata": {
        "id": "OZ0PAVHI_mhn"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now, we'll call the `apify/milvus-integration` Actor to store the data."
      ],
      "metadata": {
        "id": "xtFquWflA5kf"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Set up Milvus/Zilliz URI, token and collection name\n",
        "\n",
        "You need the URI and token of your Milvus/Zilliz instance to set up the client.\n",
        "- If you have a self-deployed Milvus server on [Docker or Kubernetes](https://milvus.io/docs/quickstart.md), use the server address and port as your URI, e.g. `http://localhost:19530`. If you enable the authentication feature on Milvus, use \"<your_username>:<your_password>\" as the token; otherwise, leave the token as an empty string.\n",
        "- If you use [Zilliz Cloud](https://zilliz.com/cloud), the fully managed cloud service for Milvus, adjust the `uri` and `token`, which correspond to the [Public Endpoint and API key](https://docs.zilliz.com/docs/on-zilliz-cloud-console#cluster-details) in Zilliz Cloud.\n",
        "\n",
        "Note that the collection does not need to exist beforehand. It will be automatically created when data is uploaded to the database.\n",
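        "\n",
        "After running the next cell, you can optionally verify the connection before going further. This is a minimal sketch, not part of the integration itself; it uses `pymilvus`, which is pulled in by the `langchain-milvus` package installed above:\n",
        "\n",
        "```python\n",
        "from pymilvus import MilvusClient\n",
        "\n",
        "# Fails fast if the URI or token is wrong; the \"apify\" collection\n",
        "# will only appear after the integration has uploaded data.\n",
        "check_client = MilvusClient(uri=os.getenv(\"MILVUS_URI\"), token=os.getenv(\"MILVUS_TOKEN\"))\n",
        "print(check_client.list_collections())\n",
        "```"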
+ ], + "metadata": { + "id": "VN2vn2sISFls" + } + }, + { + "cell_type": "code", + "source": [ + "os.environ[\"MILVUS_URI\"] = getpass(\"Enter YOUR MILVUS_URI\")\n", + "os.environ[\"MILVUS_TOKEN\"] = getpass(\"Enter YOUR MILVUS_TOKEN\")\n", + "\n", + "MILVUS_COLLECTION_NAME = \"apify\"" + ], + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" }, + "id": "_QW-DOWBSA7W", + "outputId": "e8bd3a5b-2409-4394-bace-2b15787a19f7" + }, + "execution_count": null, + "outputs": [ { - "cell_type": "code", - "source": [ - "actor_call = client.actor(\"apify/milvus-integration\").call(run_input=milvus_integration_inputs)" - ], - "metadata": { - "id": "gdN7baGrA_lR" - }, - "execution_count": null, - "outputs": [] + "name": "stdout", + "output_type": "stream", + "text": [ + "Enter YOUR MILVUS_URI··········\n", + "Enter YOUR MILVUS_TOKEN··········\n" + ] + } + ] + }, + { + "cell_type": "markdown", + "source": [ + "## Using the Website Content Crawler to scrape text content from Milvus.io\n", + "\n", + "Next, we'll use the Website Content Crawler with the Apify Python SDK. We'll start by defining the actor_id and run_input, then specify the information that will be saved to the vector database.\n", + "\n", + "The `actor_id=\"apify/website-content-crawler\"` is the identifier for the Website Content Crawler. The crawler's behavior can be fully controlled via the run_input parameters (see the [input page](https://apify.com/apify/website-content-crawler/input-schema) for more details). In this example, we’ll be crawling the Milvus documentation, which doesn’t require JavaScript rendering. Therefore, we set `crawlerType=cheerio`, define `startUrls`, and limit the number of crawled pages by setting `maxCrawlPages=10`." + ], + "metadata": { + "id": "HQzAujMc505k" + } + }, + { + "cell_type": "code", + "source": [ + "from apify_client import ApifyClient\n", + "\n", + "client = ApifyClient(os.getenv(\"APIFY_API_TOKEN\"))\n", + "\n", + "actor_id = \"apify/website-content-crawler\"\n", + "run_input = {\n", + " \"crawlerType\": \"cheerio\",\n", + " \"maxCrawlPages\": 10,\n", + " \"startUrls\": [{\"url\": \"https://milvus.io/\"}, {\"url\": \"https://zilliz.com/\"}],\n", + "}\n", + "\n", + "actor_call = client.actor(actor_id).call(run_input=run_input)" + ], + "metadata": { + "id": "_AYgcfBx681h" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "The Website Content Crawler will thoroughly crawl the site until it reaches the predefined limit set by `maxCrawlPages`. The scraped data will be stored in a `Dataset` on the Apify platform. To access and analyze this data, you can use the `defaultDatasetId`" + ], + "metadata": { + "id": "yIODy29t-_JY" + } + }, + { + "cell_type": "code", + "source": [ + "dataset_id = actor_call[\"defaultDatasetId\"]\n", + "dataset_id" + ], + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 35 }, + "id": "ZXmYlBWVUO44", + "outputId": "b5a0df26-bc3d-44a0-909b-ccbae3cbf50e" + }, + "execution_count": null, + "outputs": [ { - "cell_type": "markdown", - "source": [ - "All the scraped data is now stored in the Milvus database and is ready for retrieval and question answering\n", - "\n", - "# Retrieval and LLM generative pipeline\n", - "\n", - "Next, we'll define the retrieval-augmented pipeline using Langchain. 
The pipeline works in two stages:\n", - "\n", - "- Vectorstore (Milvus): Langchain retrieves relevant documents from Milvus by matching query embeddings with stored document embeddings.\n", - "- LLM Response: The retrieved documents provide context for the LLM (e.g., GPT-4) to generate an informed answer.\n", - "\n", - "For more details about the RAG chain, please refer to the [Langchain documentation](https://python.langchain.com/v0.2/docs/tutorials/rag/)." + "output_type": "execute_result", + "data": { + "text/plain": [ + "'P9dLFfeJAljlePWnz'" ], - "metadata": { - "id": "3hG6SvMm_mAB" + "application/vnd.google.colaboratory.intrinsic+json": { + "type": "string" } + }, + "metadata": {}, + "execution_count": 15 + } + ] + }, + { + "cell_type": "markdown", + "source": [ + "The following code fetches the scraped data from the Apify `Dataset` and displays the first scraped website" + ], + "metadata": { + "id": "tP-ctO9bVmt3" + } + }, + { + "cell_type": "code", + "source": [ + "item = client.dataset(dataset_id).list_items(limit=1).items\n", + "item[0].get(\"text\")" + ], + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 109 }, + "id": "3RikxWJwUVnr", + "outputId": "08d31136-f721-43bc-8fc5-75d51a333e04" + }, + "execution_count": null, + "outputs": [ { - "cell_type": "code", - "source": [ - "from langchain_core.output_parsers import StrOutputParser\n", - "from langchain_core.prompts import PromptTemplate\n", - "from langchain_core.runnables import RunnablePassthrough\n", - "from langchain_milvus.vectorstores import Milvus\n", - "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n", - "\n", - "embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n", - "\n", - "vectorstore = Milvus(\n", - " connection_args={\n", - " \"uri\": os.getenv(\"MILVUS_URI\"),\n", - " \"token\": os.getenv(\"MILVUS_TOKEN\"),\n", - " },\n", - " embedding_function=embeddings,\n", - " collection_name=MILVUS_COLLECTION_NAME,\n", - ")\n", - "\n", - "prompt = PromptTemplate(\n", - " input_variables=[\"context\", \"question\"],\n", - " template=\"Use the following pieces of retrieved context to answer the question. If you don't know the answer, \"\n", - " \"just say that you don't know. 
\\nQuestion: {question} \\nContext: {context} \\nAnswer:\",\n", - ")\n", - "\n", - "\n", - "def format_docs(docs):\n", - " return \"\\n\\n\".join(doc.page_content for doc in docs)\n", - "\n", - "\n", - "rag_chain = (\n", - " {\n", - " \"context\": vectorstore.as_retriever() | format_docs,\n", - " \"question\": RunnablePassthrough(),\n", - " }\n", - " | prompt\n", - " | ChatOpenAI(model=\"gpt-4o-mini\")\n", - " | StrOutputParser()\n", - ")" - ], - "metadata": { - "id": "zKr0KTfhAQz6" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "Once we have the data in the database, we can start asking questions\n", - "\n", - "---\n", - "\n" + "output_type": "execute_result", + "data": { + "text/plain": [ + "'The High-Performance Vector Database Built for Scale\\nStart running Milvus in seconds\\nfrom pymilvus import MilvusClient client = MilvusClient(\"milvus_demo.db\") client.create_collection( collection_name=\"demo_collection\", dimension=5 )\\nDeployment Options to Match Your Unique Journey\\nMilvus Lite\\nLightweight, easy to start\\nVectorDB-as-a-library runs in notebooks/ laptops with a pip install\\nBest for learning and prototyping\\nMilvus Standalone\\nRobust, single-machine deployment\\nComplete vector database for production or testing\\nIdeal for datasets with up to millions of vectors\\nMilvus Distributed\\nScalable, enterprise-grade solution\\nHighly reliable and distributed vector database with comprehensive toolkit\\nScale horizontally to handle billions of vectors\\nZilliz Cloud\\nFully managed with minimal operations\\nAvailable in both serverless and dedicated cluster\\nSaaS and BYOC options for different security and compliance requirements\\nTry Free\\nLearn more about different Milvus deployment models\\nLoved by GenAI developers\\nBased on our research, Milvus was selected as the vector database of choice (over Chroma and Pinecone). Milvus is an open-source vector database designed specifically for similarity search on massive datasets of high-dimensional vectors.\\nWith its focus on efficient vector similarity search, Milvus empowers you to build robust and scalable image retrieval systems. Whether you’re managing a personal photo library or developing a commercial image search application, Milvus offers a powerful foundation for unlocking the hidden potential within your image collections.\\nBhargav Mankad\\nSenior Solution Architect\\nMilvus is a powerful vector database tailored for processing and searching extensive vector data. It stands out for its high performance and scalability, rendering it perfect for machine learning, deep learning, similarity search tasks, and recommendation systems.\\nIgor Gorbenko\\nBig Data Architect\\nStart building your GenAI app now\\nGuided with notebooks developed by us and our community\\nRAG\\nTry Now\\nImage Search\\nTry Now\\nMultimodal Search\\nTry Now\\nUnstructured Data Meetups\\nJoin a Community of Passionate Developers and Engineers Dedicated to Gen AI.\\nRSVP now\\nWhy Developers Prefer Milvus for Vector Databases\\nScale as needed\\nElastic scaling to tens of billions of vectors with distributed architecture.\\nBlazing fast\\nRetrieve data quickly and accurately with Global Index, regardless of scale.\\nReusable Code\\nWrite once, and deploy with one line of code into the production environment.\\nFeature-rich\\nMetadata filtering, hybrid search, multi-vector and more.\\nWant to learn more about Milvus? 
View our documentation\\nJoin the community of developers building GenAI apps with Milvus, now with over 25 million downloads\\nGet Milvus Updates\\nSubscribe to get updates on the latest Milvus releases, tutorials and training from Zilliz, the creator and key maintainer of Milvus.'"
          ],
          "application/vnd.google.colaboratory.intrinsic+json": {
            "type": "string"
          }
        },
        "metadata": {},
        "execution_count": 16
      }
    ]
  },
  {
    "cell_type": "markdown",
    "source": [
      "To upload data into the Milvus database, we use the [Apify Milvus integration](https://apify.com/apify/milvus-integration). First, we need to set up the parameters for the Milvus database. Next, we select the fields (`datasetFields`) that we want to store in the database. In the example below, we are saving the `text` field and `metadata.title`."
    ],
    "metadata": {
      "id": "HFGbAIc5Vu1y"
    }
  },
  {
    "cell_type": "code",
    "source": [
      "milvus_integration_inputs = {\n",
      "    \"milvusUri\": os.getenv(\"MILVUS_URI\"),\n",
      "    \"milvusToken\": os.getenv(\"MILVUS_TOKEN\"),\n",
      "    \"milvusCollectionName\": MILVUS_COLLECTION_NAME,\n",
      "    \"datasetFields\": [\"text\", \"metadata.title\"],\n",
      "    \"datasetId\": actor_call[\"defaultDatasetId\"],\n",
      "    \"performChunking\": True,\n",
      "    \"embeddingsApiKey\": os.getenv(\"OPENAI_API_KEY\"),\n",
      "    \"embeddingsProvider\": \"OpenAI\",\n",
      "}"
    ],
    "metadata": {
      "id": "OZ0PAVHI_mhn"
    },
    "execution_count": null,
    "outputs": []
  },
  {
    "cell_type": "markdown",
    "source": [
      "Now, we'll call the `apify/milvus-integration` Actor to store the data."
    ],
    "metadata": {
      "id": "xtFquWflA5kf"
    }
  },
  {
    "cell_type": "code",
    "source": [
      "actor_call = client.actor(\"apify/milvus-integration\").call(\n",
      "    run_input=milvus_integration_inputs\n",
      ")"
    ],
    "metadata": {
      "id": "gdN7baGrA_lR"
    },
    "execution_count": null,
    "outputs": []
  },
  {
    "cell_type": "markdown",
    "source": [
      "All the scraped data is now stored in the Milvus database and is ready for retrieval and question answering\n",
      "\n",
      "# Retrieval and LLM generative pipeline\n",
      "\n",
      "Next, we'll define the retrieval-augmented pipeline using Langchain. The pipeline works in two stages:\n",
      "\n",
      "- Vectorstore (Milvus): Langchain retrieves relevant documents from Milvus by matching query embeddings with stored document embeddings.\n",
      "- LLM Response: The retrieved documents provide context for the LLM (e.g., GPT-4) to generate an informed answer.\n",
      "\n",
      "For more details about the RAG chain, please refer to the [Langchain documentation](https://python.langchain.com/v0.2/docs/tutorials/rag/).\n",
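      "\n",
      "Once the cell below has been run, you can also exercise the retrieval stage on its own before invoking the full chain. This is a minimal sketch; the query and the `k` value are only illustrative:\n",
      "\n",
      "```python\n",
      "# Fetch the top-k chunks for a query without calling the LLM,\n",
      "# to confirm that relevant context is coming back from Milvus.\n",
      "retriever = vectorstore.as_retriever(search_kwargs={\"k\": 3})\n",
      "for doc in retriever.invoke(\"What is Milvus?\"):\n",
      "    print(doc.page_content[:200])\n",
      "```"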
+ ], + "metadata": { + "id": "3hG6SvMm_mAB" + } + }, + { + "cell_type": "code", + "source": [ + "from langchain_core.output_parsers import StrOutputParser\n", + "from langchain_core.prompts import PromptTemplate\n", + "from langchain_core.runnables import RunnablePassthrough\n", + "from langchain_milvus.vectorstores import Milvus\n", + "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n", + "\n", + "embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n", + "\n", + "vectorstore = Milvus(\n", + " connection_args={\n", + " \"uri\": os.getenv(\"MILVUS_URI\"),\n", + " \"token\": os.getenv(\"MILVUS_TOKEN\"),\n", + " },\n", + " embedding_function=embeddings,\n", + " collection_name=MILVUS_COLLECTION_NAME,\n", + ")\n", + "\n", + "prompt = PromptTemplate(\n", + " input_variables=[\"context\", \"question\"],\n", + " template=\"Use the following pieces of retrieved context to answer the question. If you don't know the answer, \"\n", + " \"just say that you don't know. \\nQuestion: {question} \\nContext: {context} \\nAnswer:\",\n", + ")\n", + "\n", + "\n", + "def format_docs(docs):\n", + " return \"\\n\\n\".join(doc.page_content for doc in docs)\n", + "\n", + "\n", + "rag_chain = (\n", + " {\n", + " \"context\": vectorstore.as_retriever() | format_docs,\n", + " \"question\": RunnablePassthrough(),\n", + " }\n", + " | prompt\n", + " | ChatOpenAI(model=\"gpt-4o-mini\")\n", + " | StrOutputParser()\n", + ")" + ], + "metadata": { + "id": "zKr0KTfhAQz6" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "Once we have the data in the database, we can start asking questions\n", + "\n", + "---\n", + "\n" + ], + "metadata": { + "id": "GxDNZ7LqAsWV" + } + }, + { + "cell_type": "code", + "source": [ + "question = \"What is Milvus database?\"\n", + "\n", + "rag_chain.invoke(question)" + ], + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 72 }, + "id": "qfaWI6BaAko9", + "outputId": "25f563fe-28eb-455c-a168-8b013a30eb9a" + }, + "execution_count": null, + "outputs": [ { - "cell_type": "code", - "source": [ - "question = \"What is Milvus database?\"\n", - "\n", - "rag_chain.invoke(question)" - ], - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 72 - }, - "id": "qfaWI6BaAko9", - "outputId": "25f563fe-28eb-455c-a168-8b013a30eb9a" - }, - "execution_count": null, - "outputs": [ - { - "output_type": "execute_result", - "data": { - "text/plain": [ - "'Milvus is an open-source vector database specifically designed for billion-scale vector similarity search. It facilitates efficient management and querying of vector data, which is essential for applications involving unstructured data, such as AI and machine learning. Milvus allows users to perform operations like CRUD (Create, Read, Update, Delete) and vector searches, making it a powerful tool for handling large datasets.'" - ], - "application/vnd.google.colaboratory.intrinsic+json": { - "type": "string" - } - }, - "metadata": {}, - "execution_count": 20 - } - ] - }, - { - "cell_type": "markdown", - "source": [ - "# Conclusion\n", - "\n", - "In this tutorial, we demonstrated how to crawl website content using Apify, store the data in a Milvus vector database, and use a retrieval-augmented pipeline to perform question-answering tasks. 
By combining Apify's web scraping capabilities with Milvus/Zilliz for vector storage and Langchain for language models, you can build highly effective information retrieval systems.\n",
        "\n",
        "To improve data collection and updates in the database, the Apify integration offers [incremental updates](https://apify.com/apify/milvus-integration#incrementally-update-database-from-the-website-content-crawler), which update only new or modified data based on checksums. Additionally, it can automatically [remove outdated](https://apify.com/apify/milvus-integration#delete-outdated-expired-data) data that hasn't been crawled within a specified time. These features help keep your vector database optimized and ensure that your retrieval-augmented pipeline remains efficient and up-to-date with minimal manual effort.\n",
        "\n",
        "For more details on the Apify-Milvus integration, please refer to the [Apify Milvus documentation](https://docs.apify.com/platform/integrations/milvus) and the [integration README file](https://apify.com/apify/milvus-integration).\n"
      ],
      "metadata": {
        "id": "H4gmPwe3xqGk"
      }
    }
  ]
}