Using llama3.2-vision:11b for the app agent #140
This is probably because the model you use is not strong enough. We feed the same prompts to all models. If a model fails to follow the instructions, it may generate output in a different format than we expect.
Thanks for your help! I upgraded my VM and ran the 90b model with the same issue. The context window is 128K just like GPT so I wonder why it's ignoring the prompt. I think Ollama wants the image as a filename rather than bytes in the context window. Do you think that change might help?
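For what it's worth, Ollama's REST API (POST /api/generate) takes vision input as an `images` array of base64-encoded strings rather than file paths or raw bytes. A minimal sketch of building such a request body (model name and prompt are placeholders, and the tiny byte string stands in for a real screenshot):

```python
import base64
import json


def build_generate_payload(model: str, prompt: str, image_bytes: bytes) -> str:
    """Build the JSON body for Ollama's POST /api/generate endpoint.

    Ollama expects vision inputs in an `images` list of base64-encoded
    strings, not filenames or raw bytes in the context window.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # ask for a single JSON response, not a stream
    }
    return json.dumps(payload)


# Example: a fake 3-byte "image" just to demonstrate the encoding step.
body = build_generate_payload(
    "llama3.2-vision:11b", "Describe this screenshot.", b"\x89PN"
)
print(json.loads(body)["images"][0])  # prints "iVBO"
```

If the client is sending the image in some other shape (e.g. a path string), the model would never see the pixels, which could explain responses that ignore the visual part of the prompt.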
Do we have to configure both the host and app agents?
How did you set up Ollama? Can you provide the endpoint and the other settings you used in the config file?
I was running Llama locally, but I think the endpoint should just be /api/generate? Sorry, that environment is gone now, so I'm working from memory.
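For reference, an Ollama-backed app agent in UFO's config.yaml would look roughly like the following. The key names and values here are illustrative and should be double-checked against UFO's config template:

```yaml
APP_AGENT:
  VISUAL_MODE: True                   # vision model, so visual mode on
  API_TYPE: "Ollama"                  # backend type; check UFO's accepted values
  API_BASE: "http://localhost:11434"  # Ollama's default local endpoint
  API_MODEL: "llama3.2-vision:11b"    # model tag as pulled in Ollama
```

Note that API_BASE is typically the server root; whether the /api/generate path is appended by the client or must be included here is worth verifying in UFO's documentation.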
I'm using GPT for my host agent, and its responses have all the components I would expect.
However, when I use Ollama as my app agent, the responses are not in the format UFO expects: there are no Observations, Thoughts, or even Plans. I do get a decent free-form response from the Llama model, though.
What could I be doing wrong? How does the AppAgent know to provide Thoughts and Observations?