Issues: SciPhi-AI/R2R
- OLLAMA_API_BASE not taking effect when running without Docker serve, but in a container (#1690, opened Dec 12, 2024 by jonathanortega2023)
- How to configure and avoid Open AI rate limiting when ingesting files? (#1601, opened Nov 17, 2024 by emahpour)
- Future pending cb=[ _chain_future.<locals>._call_check_cancel() ] when using python (#1506, opened Oct 27, 2024 by piobogiwo)
- Add support for LMStudio + MLX for maximal speed and efficiency on Apple Silicon (#1495, opened Oct 25, 2024 by AriaShishegaran)
- Issue when ingesting a file and adding it to a collection right after (#1435, opened Oct 20, 2024 by jeremi)
- Just running the r2r command in terminal or with some args throws exceptions (#1386, opened Oct 12, 2024 by AriaShishegaran)
- SDK agent streaming responses breaks after the conversation reaches some size threshold. (#1215, opened Sep 19, 2024 by stheobald)