🤖 Update readme.md with RAG example #32
Conversation
Could you check the readme.md file? Which input formats does it support? Just str? Does it support extensions such as mp4, pdf, audio, and md? We need this. How can we do it?
@kadirnar Add this question to the readme.
@kadirnar Also, should we provide an option for users to specify their text input path? Here's the general workflow:

1. Pass the YouTube link to the system.
2. After processing, save the transcript as a text file in the same folder (the part that saves the .txt file still needs updating).
3. Use the RAG (Retrieval-Augmented Generation) model to ask questions about the video. By default, the system uses the transcript.txt file.
4. Users can ask questions to gain insights from the video content.

There is no need to support PDF files in this scenario; plain text will suffice. This workflow provides a complete end-to-end example for anyone who wants to summarize YouTube videos or other video content.
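A minimal sketch of that workflow, for illustration only: it assumes yt-dlp, openai-whisper, LangChain, sentence-transformers and FAISS are installed, and none of the function names below are the actual whisper-plus API.

```python
# Illustrative end-to-end sketch: YouTube link -> transcript.txt -> RAG retriever.
# Function names are hypothetical, not the whisper-plus API.
import whisper
import yt_dlp
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS


def youtube_to_transcript(url: str, txt_path: str = "transcript.txt") -> str:
    """Download the audio track and save the Whisper transcript as plain text."""
    with yt_dlp.YoutubeDL({"format": "bestaudio/best", "outtmpl": "audio.%(ext)s"}) as ydl:
        info = ydl.extract_info(url, download=True)
        audio_path = ydl.prepare_filename(info)
    result = whisper.load_model("base").transcribe(audio_path)
    with open(txt_path, "w", encoding="utf-8") as f:
        f.write(result["text"])
    return txt_path


def build_retriever(txt_path: str = "transcript.txt"):
    """Index the transcript so a RAG chain can answer questions about the video."""
    docs = TextLoader(txt_path, encoding="utf-8").load()
    chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    return FAISS.from_documents(chunks, embeddings).as_retriever()
```

A retrieval QA chain (or the project's own chat pipeline) can then answer user questions against this retriever.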
I understand. There is no need right now; I can add it later. We shouldn't hard-code the models manually. Can you fix this? https://github.com/kadirnar/whisper-plus/blob/main/whisperplus/pipelines/chatbot.py#L12
Can we add the OpenAI embedding model there?
@kadirnar What issue are we getting with that embedding model? Should we add this?
But we would need to provide the API key, and that's not possible for every user. The one below is also open source.
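For illustration, one way to let users choose between an open-source embedding model and OpenAI embeddings; the function name and default model id are assumptions, not the current chatbot.py code, and import paths may differ across LangChain versions.

```python
# Hypothetical helper: default to a local, open-source sentence-transformers
# model so no API key is required, but allow OpenAI embeddings for users who
# have set OPENAI_API_KEY.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_openai import OpenAIEmbeddings


def load_embedding_model(model_name: str = "sentence-transformers/all-MiniLM-L6-v2",
                         use_openai: bool = False):
    if use_openai:
        # Requires the OPENAI_API_KEY environment variable.
        return OpenAIEmbeddings()
    # Open-source default: runs locally, no API key needed.
    return HuggingFaceEmbeddings(model_name=model_name)
```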
I want the user to decide this. You must add a model_path parameter to the function. When I want to change the model, I should not have to look at the source code. We just have to handle this through the load_llm_model function.
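A hedged sketch of what a parameterized load_llm_model could look like; the default model id and keyword arguments are placeholders, and the real whisper-plus signature may differ.

```python
# Sketch only: expose model_path so users can swap models without editing the
# source. The default id is a placeholder, not the project's actual default.
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_llm_model(model_path: str = "mistralai/Mistral-7B-Instruct-v0.1"):
    """Load a causal LM and its tokenizer from a Hugging Face id or local path."""
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
    return model, tokenizer
```

Callers can then pass any compatible model, e.g. `load_llm_model("local/path/to/model")`.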
@kadirnar Got it, I'll work on it.
No description provided.