From what I can tell, PIPELINES_URLS is currently the only way to do this.
But you don't have to upload anything publicly for that:
You can use a private gist, which is only reachable by someone who knows the link.
You can start a minimal HTTP server and host the file yourself for the few seconds the container needs while starting up; just ask ChatGPT or any LLM of your choice for a simple single-file HTTP server.
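Such a self-hosted server is a few lines of the Python standard library. A sketch (the port is an arbitrary assumption; run it from the directory containing the pipeline file, then point PIPELINES_URLS at `http://<host>:8000/test.py`):

```python
import http.server
import socketserver


def serve_current_dir(port: int = 8000) -> socketserver.TCPServer:
    # Serve the current working directory over plain HTTP so the
    # Pipelines container can fetch the pipeline file once at startup.
    return socketserver.TCPServer(("", port), http.server.SimpleHTTPRequestHandler)


if __name__ == "__main__":
    with serve_current_dir() as httpd:
        print("Serving on port", httpd.server_address[1])
        httpd.serve_forever()  # stop with Ctrl+C once the pipeline is loaded
```

The server can be killed as soon as the Pipelines container has fetched the file.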
I can't find a way to upload a pipeline with dependencies.
I created this test pipeline:
test.py:
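The file contents did not survive extraction; below is a minimal sketch of what such a test pipeline might look like, assuming the standard Pipelines scaffold in which dependencies are declared in a `requirements:` line of the frontmatter docstring. The class layout follows the Pipelines examples; the body is a placeholder, since the point of the test is only the requirements line:

```python
"""
title: Test Pipeline
requirements: llama-index-vector-stores-qdrant
"""

from typing import Generator, Iterator, List, Union


class Pipeline:
    def __init__(self):
        # Name shown in the WebUI model list.
        self.name = "Test Pipeline"

    async def on_startup(self):
        # Called when the Pipelines server starts; a real pipeline
        # would import and initialize llama-index here.
        pass

    async def on_shutdown(self):
        pass

    def pipe(
        self, user_message: str, model_id: str, messages: List[dict], body: dict
    ) -> Union[str, Generator, Iterator]:
        # Placeholder response; only the requirements line matters here.
        return "test"
```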
Notice the llama-index requirement. I then added the pipeline through the UI and checked the installed modules with docker exec pipelines pip list | grep llama-index-vector-stores-qdrant, and it wouldn't find it. I'm not sure whether this is an issue with Pipelines or with WebUI. I also can't use PIPELINES_URLS, because I would have to upload the pipeline publicly, which isn't an option.