Open source multi-modal RAG for building AI apps over private knowledge.
A demo of Cache-Augmented Generation (CAG) with an LLM.
Integrate Anyparser's powerful content extraction capabilities with LangChain for enhanced AI workflows. This integration package enables seamless use of Anyparser's document processing and data extraction features within your LangChain applications.
This repository demonstrates Cache-Augmented Generation (CAG) using the Mistral-7B model.
Anyparser TypeScript SDK for RAG/ETL pipelines: file content extraction from PDF and Microsoft Office documents, plus OCR/image-to-text, audio-to-text, and website-to-text conversion.
Anyparser Python SDK for RAG/ETL pipelines: file content extraction from PDF and Microsoft Office documents, plus OCR/image-to-text, audio-to-text, and website-to-text conversion.
Supercharge your AI workflows by combining Anyparser’s advanced content extraction with Crew AI. With this integration, you can effortlessly leverage Anyparser’s document processing and data extraction tools within your Crew AI applications.
Instantly access Anyparser's robust document processing and data extraction capabilities directly within your LlamaIndex workflows. Enhance your AI applications with superior content understanding and data quality.