By default, the current code uses an OpenAI API key to access the LLM service. I'd like to switch to a local LLM, which I have deployed with LLaMA-Factory and which is accessible via a local API, for example at http://localhost:7788/v1/. Could you guide me on how to make this adjustment? Thank you!
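For reference, this is roughly what I have in mind — a minimal sketch assuming the server exposes an OpenAI-compatible API (the model name and the api_key placeholder are my assumptions, not confirmed values):

```python
# Minimal sketch: point the standard OpenAI client at a local
# OpenAI-compatible endpoint instead of api.openai.com.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:7788/v1/",
    api_key="not-needed",  # placeholder; local servers typically don't validate it
)

response = client.chat.completions.create(
    model="llama3-70b",  # hypothetical name; use whatever your server reports
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```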
@deeper-coder We need function calling; if you can get a function-calling model to work reliably, then it will work. But you need a class with a run(task: str) or __call__(task: str) method to integrate into the ToTAgent class.
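Roughly, something like the following minimal sketch is what the integration expects. Only the run(task: str) and __call__(task: str) signatures come from the requirement above; the class name, client setup, and model name are illustrative assumptions:

```python
# Sketch of a wrapper exposing the run(task: str) / __call__(task: str)
# interface that ToTAgent integrates with. Everything except those two
# method signatures is a hypothetical illustration.
from openai import OpenAI

class LocalLLMAgent:
    def __init__(self, base_url: str = "http://localhost:7788/v1/",
                 model: str = "llama3-70b"):
        # Reuse the OpenAI client against a local OpenAI-compatible server.
        self.client = OpenAI(base_url=base_url, api_key="not-needed")
        self.model = model

    def run(self, task: str) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": task}],
        )
        return response.choices[0].message.content

    def __call__(self, task: str) -> str:
        return self.run(task)
```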
I plan to use llama3 70B, and I noticed that in the OpenAIFunctionCaller class you've implemented the run method as shown in the image. So, can I achieve my desired functionality by passing base_url = "http://localhost:7788/v1/" in **kwargs?
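In other words, something like this hypothetical usage — assuming OpenAIFunctionCaller forwards extra keyword arguments such as base_url to the underlying OpenAI client (the import path and parameter names here are my assumptions, not a confirmed API):

```python
# Hypothetical usage: assumes OpenAIFunctionCaller passes **kwargs
# (e.g. base_url) through to the underlying OpenAI client.
# Import path and constructor parameters may differ in practice.
from swarm_models import OpenAIFunctionCaller  # assumed import path

caller = OpenAIFunctionCaller(
    system_prompt="You are a helpful assistant.",
    base_url="http://localhost:7788/v1/",  # local LLaMA-Factory endpoint
    api_key="not-needed",
)
result = caller.run("Summarize the tree-of-thoughts algorithm.")
```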