
LangChain Assistant

This repository contains a LangChain-based assistant built with FastAPI, LangChain, and Retrieval-Augmented Generation (RAG). The assistant can load data from sitemaps and PDFs, vectorize it, and store it in a database for intelligent querying. It also provides a chat interface for interacting with the assistant.

Features

  • Health Endpoint: Monitor the health of the service.
  • Chat Endpoint (/bot/chat): Interact with the assistant in real-time.
  • Load Endpoint (/bot/load): Vectorize and store data from sitemaps and PDFs.
  • Custom Prompt Support: The assistant uses a prompt defined in the prompt.txt file located in the root directory.
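The two bot endpoints above can be exercised with a small HTTP client. The sketch below is a minimal, hypothetical example: the port (8000) and the JSON field names (`message`, `session_id`) are assumptions, not confirmed by the repository docs, so adjust them to match the actual request schema.

```python
# Hypothetical client sketch for the /bot/chat endpoint.
# Assumptions: the service listens on localhost:8000 and the
# request body uses "message" and "session_id" fields.
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed default FastAPI port


def build_chat_request(message: str, session_id: str = "demo") -> urllib.request.Request:
    """Build a POST request for /bot/chat with a JSON body."""
    body = json.dumps({"message": message, "session_id": session_id}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/bot/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    # Requires the service to be running (see Docker Setup below).
    req = build_chat_request("What can you tell me about the loaded documents?")
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())
```

A request against `/bot/load` would follow the same pattern with a different path and a body pointing at a sitemap URL or PDF.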

Setup and Installation

Prerequisites

  • Docker
  • Docker Compose

Environment Variables

Create a .env file in the root directory, or copy the provided example.env file and adjust the values to set up your environment variables.
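One way to start, assuming you keep the repository's example.env as a template (the actual variable names are defined in that file):

```shell
# Seed .env from the shipped example file, then edit the values
# (API keys, database settings, etc.) before starting the stack.
if [ -f example.env ]; then
  cp example.env .env
fi
```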

Docker Setup

Build and run the application using Docker Compose:

docker-compose up --build
