Bridging Ancient Wisdom and Modern Accessibility
Understanding spiritual texts such as the Bhagavad Gita and the Yoga Sutras of Patanjali is challenging due to:
- Complex language.
- Contextual ambiguity.
- Information overload.
Jigyasa simplifies access to spiritual wisdom using a Retrieval-Augmented Generation (RAG) pipeline.
It retrieves relevant shlokas and generates contextually accurate responses, pairing modern retrieval and generation techniques with an intuitive interface.
- Built for seamless and intuitive user interaction.
- Combines semantic retrieval with structured knowledge retrieval.
- Ensures traceability, transparency, and debugging support.
- Responses are backed by citations and confidence scoring for reliability.
- Model quantization reduces costs and improves inference speed.
- Designed to integrate new texts and support multilingual capabilities.
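The citation and confidence-scoring features above could be implemented in many ways; the README does not specify Jigyasa's exact scheme. A minimal sketch, assuming confidence is derived from the retriever's similarity scores (damped when the top two passages are nearly tied, since a close race signals ambiguous retrieval):

```python
def confidence(scores, max_gap=0.5):
    """Map retrieval similarity scores (assumed in the 0..1 cosine
    range) to one confidence value: the top score, damped when the
    gap to the runner-up is small (ambiguous retrieval).
    Illustrative heuristic, not Jigyasa's published formula."""
    if not scores:
        return 0.0
    ranked = sorted(scores, reverse=True)
    top = ranked[0]
    gap = top - ranked[1] if len(ranked) > 1 else top
    # A decisive gap keeps confidence near the top score;
    # a near-tie roughly halves it.
    return round(top * (0.5 + min(gap, max_gap)), 3)
```

A clear winner (`[0.9, 0.5]`) keeps most of its score, while a near-tie (`[0.9, 0.89]`) is reported with much lower confidence.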
- Uses Pinecone Vector DB for efficient semantic search.
- Reranks results using BERT-based models for relevance.
- Powered by LLaMA 3.1, with prompt engineering for accurate, citation-backed answers.
- Integrated with LangSmith for tracking and debugging the RAG pipeline.
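The Pinecone retrieval step might look like the sketch below. The function is duck-typed so any object with the same `query()` signature works; the record shape (`id`, `score`, a `text` metadata field) and the `embed` callable are illustrative assumptions, not Jigyasa's actual schema:

```python
def retrieve_shlokas(index, embed, query, top_k=5):
    """Embed the query and fetch the top-k semantically similar
    shlokas. `index` is a Pinecone Index (or any stand-in exposing
    the same query() signature); `embed` maps text to a dense vector."""
    response = index.query(
        vector=embed(query),
        top_k=top_k,
        include_metadata=True,
    )
    # Flatten matches into plain dicts for the rest of the pipeline.
    return [
        {"id": m["id"], "score": m["score"], "text": m["metadata"]["text"]}
        for m in response["matches"]
    ]
```

With the real client this would be called with something like `index = Pinecone(api_key=...).Index("jigyasa")` (index name assumed).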
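The reranking stage can be expressed as a generic scoring pass. In Jigyasa the scorer would be a BERT-based cross-encoder (e.g. sentence-transformers' `CrossEncoder.predict`); the lexical-overlap scorer below is only a lightweight stand-in so the sketch runs without model weights:

```python
def rerank(query, passages, score_fn, keep=3):
    """Re-order retrieved passages by score_fn(query, text) -> float,
    keeping the `keep` most relevant. score_fn is pluggable: a neural
    cross-encoder in production, any callable in tests."""
    scored = [(score_fn(query, p["text"]), p) for p in passages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored[:keep]]

def overlap_score(query, text):
    # Toy lexical stand-in for a BERT cross-encoder (illustration only):
    # fraction of query words that appear in the passage.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)
```

Separating the scorer from the sort keeps the pipeline testable and lets the reranking model be swapped without touching retrieval code.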
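The prompt-engineering step for citation-backed answers can be sketched as follows. Jigyasa's actual prompt template is not published in this README; the wording and the `[n]` citation convention below are illustrative:

```python
def build_prompt(question, passages):
    """Assemble a grounded prompt for the LLaMA 3.1 generator:
    each passage is numbered so the model can cite [n], and the
    instructions forbid answering beyond the provided shlokas."""
    context = "\n".join(
        f"[{i}] ({p['id']}) {p['text']}"
        for i, p in enumerate(passages, start=1)
    )
    return (
        "Answer using ONLY the shlokas below. Cite sources as [n]. "
        "If the answer is not supported by them, say so.\n\n"
        f"Shlokas:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

Numbering the passages inline is what makes the generated citations traceable back to specific shlokas retrieved from the vector store.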
The architecture spans the full flow from query processing to response generation.
- Retrieval Accuracy: 71% F1-score on test queries.
- Cost Reduction: 35% savings through model quantization.
- Latency: 2x faster inference, averaging 150 ms per query.
- Agentic AI: Enable dynamic workflows for better decision-making.
- Multilingual Support: Expand accessibility to different languages.
- Cross-referencing: Enhance reranking and interlinking mechanisms.
- Front-end: Web UI
- Back-end: LLaMA 3.1, Pinecone
- Development Tools: LangSmith, Python
Contributions are welcome! Please fork the repository, create a new branch, and submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.
Jigyasa bridges ancient wisdom and modern accessibility with innovative technology, ensuring a seamless exploration of spiritual texts.