Problem with using Ollama/codellama as local LLM engine in Ubuntu 24.04 #3196
Comments
I'll see if I can get someone to take a look at this.
As I replied in the thread to the kevin-support-bot: "Probably, it may be a duplicate of OpenDevin#2844. However, I'm not (yet) interested in setting up a local development environment to get things running."
Same issue trying the following:
Easy as pie. I'm a FreeBSD guy, so my Docker-fu is very weak and I don't really use Ollama, but with LocalAI it is a cinch. Normally I run the build version, but I tried this on an Ubuntu 22 LTS, a fresh Mint, and just firing it up on Debian.

Grab LocalAI via the bash installer if you really want easy:

curl https://localai.io/install.sh | sh

You can specify options while grabbing, but if you don't, the script will try to do the right thing (i.e., if you have Docker and the CUDA container runtime toolkit, it'll go that route). Fair warning, the image is pretty large, because it has a lot pre-baked in (TTS, STT, text-to-image generation, etc.).

As far as I know, LocalAI was the first and only open-source project to maintain feature parity with the OpenAI API nearly from the beginning. While it seems others are bolting it on and starting to catch up, LocalAI is maturing on all fronts and even has a simple and elegant web UI for browsing models, quick prompt/inference testing, and so on.

Or you could download the binary from GitHub. Or, if you are a very high-level Warlock, you could attempt to build from source, but be warned: the many submodules can make it tricky!

At any rate, once the script is finished, it will tell you LocalAI is up and running on *:8080, if you didn't change it.

Now, what I usually do is have a LiteLLM proxy running:

litellm --model openai/gpt-3.5-turbo --host 0.0.0.0 --port 11434 --api_base 'http://10.10.10.10:8080/v1' --detailed_debug --alias synthia34b-awq

That way I can swap LLMs out underneath whatever app(s) on whichever boxen. But you could do an ssh -L proxy if you need to work around Docker host-internal networking issues:

ssh -L 0.0.0.0:11434:192.168.50.65:8080 localhost -N

Here's an OpenDevin startup that will have you right as rain:

#!/usr/bin/env bash
export OPENAI_API_BASE=http://192.168.50.41:8080/v1
export WORKSPACE_BASE=/home/matto/workspace
docker run \
-it \
--add-host host.docker.internal:host-gateway \
-e SANDBOX_USER_ID="opendevin" \
-e LLM_API_KEY="stfu" \
-e LLM_MODEL="openai/gpt-3.5-turbo" \
-e LLM_BASE_URL="http://172.18.0.1:11434" \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
ghcr.io/opendevin/opendevin:latest
# Plus whatever else you crazy Docker kids do!
##--pull=always
#-e SANDBOX_USER_ID=$(id -u)

FYI: I'm sure I am doing it wrong, but when wrong works right, I'm OK with it. I'll see about getting some of my MemGPT prompts going in here and post if I find anything interesting :D
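A hedged sanity check against LocalAI's OpenAI-compatible endpoint, assuming it is listening on localhost:8080 and that the model name in the request maps to something actually installed (adjust both to your setup):

# Quick test of LocalAI's OpenAI-compatible chat completions endpoint.
# Assumes LocalAI on localhost:8080; the model name is a placeholder.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Say hello"}]}'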
Thank you for pointing me in the direction of using LiteLLM as a proxy between external/local LLMs and OpenDevin. It did the trick getting codellama and OpenDevin talking together. /Henrik
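For concreteness, here is a minimal sketch of such a LiteLLM proxy in front of Ollama, reusing only the flags already shown earlier in the thread; ollama/codellama:7b follows LiteLLM's provider-prefix convention, localhost:11434 is Ollama's default port, and the listen port and alias are arbitrary placeholders:

# Hedged sketch: LiteLLM proxy fronting a local Ollama instance.
# Assumes `ollama serve` is reachable on localhost:11434 and that
# codellama:7b has already been pulled; port 4000 is arbitrary.
litellm --model ollama/codellama:7b \
        --api_base 'http://localhost:11434' \
        --host 0.0.0.0 --port 4000 \
        --alias codellama-local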
And then, after some research: now I wonder why the environment variables above aren't reflected in OpenDevin's Settings. My observation is that OpenDevin follows the settings from the Settings menu, regardless of the environment variables given above.
There is some discussion on the ultimate configs OpenDevin uses: #3220. I believe the settings in the UI are the most important, then whatever is in the configs. So yes, some of the settings that are in the UI will override the configs...
I had the same issue with OpenDevin not communicating with my local Llama LLM. I had to start the Ollama server with the following, which allows communication with Docker containers:

OLLAMA_HOST=0.0.0.0:11434 OLLAMA_ORIGINS=* ollama serve

Then run the rest as usual with docker run (a sketch follows below).
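A minimal sketch of what that docker run can look like when pointing OpenDevin directly at the host-side Ollama server, mirroring the startup script earlier in the thread. Assumptions: the LiteLLM-style model string ollama/codellama:7b is accepted (the earlier example used openai/gpt-3.5-turbo in the same style), host.docker.internal resolves via the --add-host mapping, and the workspace path is a placeholder. If Ollama runs as a systemd service instead of an ad-hoc ollama serve, the same OLLAMA_HOST/OLLAMA_ORIGINS variables can be set persistently via sudo systemctl edit ollama.service.

#!/usr/bin/env bash
# Hedged sketch: OpenDevin container talking straight to a host-side Ollama.
# Assumes Ollama listens on 0.0.0.0:11434 (as above), codellama:7b has been
# pulled, and that Ollama ignores the dummy API key.
export WORKSPACE_BASE=$HOME/workspace   # placeholder path
docker run \
  -it \
  --add-host host.docker.internal:host-gateway \
  -e SANDBOX_USER_ID=$(id -u) \
  -e LLM_API_KEY="unused" \
  -e LLM_MODEL="ollama/codellama:7b" \
  -e LLM_BASE_URL="http://host.docker.internal:11434" \
  -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
  -v $WORKSPACE_BASE:/opt/workspace_base \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  ghcr.io/opendevin/opendevin:latest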
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
This issue was closed because it has been stalled for over 30 days with no activity. |
Is there an existing issue for the same bug?
Describe the bug
Hi
I'm trying to get OpenDevin to communicate with codellama:7b.
I'm using this Linux bash CLI statement:
with the environment variable WORKSPACE_BASE pointing to a local test project. OpenDevin starts and seems to be ready to take input from the user. But when the user gives input like "hi", after a long time, OD replies with:
There was an unexpected error while running the agent
and a lot of error messages in its shell:
Yes, the local Ollama LLM is running and I'm able to "talk" with it through its prompt.
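A quick, hedged way to double-check that the server is also reachable from outside its own prompt is Ollama's model-listing endpoint; the address below is a placeholder for wherever ollama serve is listening:

# Should return the locally pulled models, including codellama:7b.
curl http://127.0.0.1:11434/api/tags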
Can someone help me figure out what I'm doing wrong?
/Henrik
Current OpenDevin version
Installation and Configuration
Model and Agent
Operating System
Linux Ubuntu 24.04
Reproduction Steps
Logs, Errors, Screenshots, and Additional Context
No response