From 870d5c118d21d5fcb4cc3c96e80458a54155f476 Mon Sep 17 00:00:00 2001
From: Theodore Rolle
Date: Mon, 17 Jul 2023 08:37:51 -0700
Subject: [PATCH] Update README.md

Fixing broken docs links
---
 README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index fd2de7e20..b5b808c77 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # 🦜️🔗 ChatLangChain
 
-This repo is an implementation of a locally hosted chatbot specifically focused on question answering over the [LangChain documentation](https://langchain.readthedocs.io/en/latest/).
+This repo is an implementation of a locally hosted chatbot specifically focused on question answering over the [LangChain documentation](https://python.langchain.com/docs).
 Built with [LangChain](https://github.com/hwchase17/langchain/) and [FastAPI](https://fastapi.tiangolo.com/).
 
 The app leverages LangChain's streaming support and async API to update the page in real time for multiple users.
@@ -8,9 +8,9 @@ The app leverages LangChain's streaming support and async API to update the page
 ## ✅ Running locally
 1. Install dependencies: `pip install -r requirements.txt`
 1. Run `ingest.sh` to ingest LangChain docs data into the vectorstore (only needs to be done once).
-    1. You can use other [Document Loaders](https://langchain.readthedocs.io/en/latest/modules/document_loaders.html) to load your own data into the vectorstore.
+    1. You can use other [Document Loaders](https://python.langchain.com/docs/modules/data_connection/document_loaders/) to load your own data into the vectorstore.
 1. Run the app: `make start`
-    1. To enable tracing, make sure `langchain-server` is running locally and pass `tracing=True` to `get_chain` in `main.py`. You can find more documentation [here](https://langchain.readthedocs.io/en/latest/tracing.html).
+    1. To enable tracing, make sure `langchain-server` is running locally and pass `tracing=True` to `get_chain` in `main.py`. You can find more documentation [here](https://python.langchain.com/docs/modules/callbacks/how_to/tracing).
 1. Open [localhost:9000](http://localhost:9000) in your browser.
 
 ## 🚀 Important Links
@@ -30,11 +30,11 @@ There are two components: ingestion and question-answering.
 
 Ingestion has the following steps:
 
 1. Pull html from documentation site
-2. Load html with LangChain's [ReadTheDocs Loader](https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/readthedocs_documentation.html)
-3. Split documents with LangChain's [TextSplitter](https://langchain.readthedocs.io/en/latest/reference/modules/text_splitter.html)
+2. Load html with LangChain's [ReadTheDocs Loader](https://python.langchain.com/docs/modules/data_connection/document_loaders/integrations/readthedocs_documentation)
+3. Split documents with LangChain's [TextSplitter](https://python.langchain.com/docs/modules/data_connection/document_transformers/#text-splitters)
 4. Create a vectorstore of embeddings, using LangChain's [vectorstore wrapper](https://python.langchain.com/en/latest/modules/indexes/vectorstores.html) (with OpenAI's embeddings and FAISS vectorstore).
 
-Question-Answering has the following steps, all handled by [ChatVectorDBChain](https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/chat_vector_db.html):
+Question-Answering has the following steps, all handled by [ChatVectorDBChain](https://python.langchain.com/docs/modules/chains/popular/chat_vector_db):
 1. Given the chat history and new user input, determine what a standalone question would be (using GPT-3).
 2. Given that standalone question, look up relevant documents from the vectorstore.
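The Question-Answering flow this patch's README describes (fold history into a standalone question, retrieve relevant documents, then answer) can be sketched with stubbed components. Everything below — `ChatQA`, `condense_question`, `retrieve`, `answer`, and the keyword-overlap scoring — is a hypothetical stand-in for the LLM, embedding, and FAISS calls the real app makes through ChatVectorDBChain, not LangChain API:

```python
# Sketch of the ChatVectorDBChain-style flow with stubbed components.
# All names here are hypothetical placeholders, not real LangChain APIs.
from dataclasses import dataclass, field


@dataclass
class ChatQA:
    docs: dict[str, str]                                    # doc id -> text ("vectorstore" stand-in)
    history: list[tuple[str, str]] = field(default_factory=list)

    def condense_question(self, question: str) -> str:
        # Step 1: fold prior turns into a standalone question (GPT-3 in the real app).
        context = " ".join(q for q, _ in self.history)
        return f"{context} {question}".strip()

    def retrieve(self, standalone: str, k: int = 1) -> list[str]:
        # Step 2: look up relevant documents; keyword overlap stands in
        # for the embedding similarity search FAISS would do.
        terms = set(standalone.lower().replace("?", "").split())
        scored = sorted(
            self.docs.values(),
            key=lambda d: len(terms & set(d.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def answer(self, question: str) -> str:
        # Step 3: produce an answer from question + retrieved docs
        # (an LLM call in the real app) and record the turn.
        standalone = self.condense_question(question)
        top_docs = self.retrieve(standalone)
        reply = f"Based on: {top_docs[0]}"
        self.history.append((question, reply))
        return reply


qa = ChatQA(docs={"d1": "LangChain supports streaming responses",
                  "d2": "FAISS stores dense embeddings"})
print(qa.answer("Does LangChain support streaming?"))
# -> Based on: LangChain supports streaming responses
```

Because each turn is appended to `history`, a follow-up like "How do I enable it?" is condensed together with the earlier exchange before retrieval, which is the point of the standalone-question step.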