
[Bug]: Error when trying to pull all docker evaluation containers (SWEBench) #4242

Closed
AlexCuadron opened this issue Oct 7, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@AlexCuadron
Contributor

Is there an existing issue for the same bug?

Describe the bug

I try to pull all the Docker evaluation containers using evaluation/swe_bench/scripts/docker/pull_all_eval_docker.sh instance lite, as described in the documentation. However, I run into the following error:

❯ ./evaluation/swe_bench/scripts/docker/pull_all_eval_docker.sh instance lite
Pulling images from ./evaluation/swe_bench/scripts/docker/all-swebench-lite-instance-images.txt
Pulling docker images for [instance] level
Pattern: sweb.base\|sweb.env\|sweb.eval
Image file: ./evaluation/swe_bench/scripts/docker/all-swebench-lite-instance-images.txt
Pulling lite/sweb.base.x86_64:latest into sweb.base.x86_64:latest
Error response from daemon: pull access denied for lite/sweb.base.x86_64, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
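From the log above, the script appears to prepend the set name (lite) to each image path, producing lite/sweb.base.x86_64:latest, which is not a valid Docker Hub repository. As a rough illustration only (the actual fix landed elsewhere and may differ), stripping that prefix recovers a plausible image name:

```shell
# Hypothetical illustration of the suspected bug: the set name "lite/" is
# prepended to the image path. Bash prefix removal strips it back off.
# This is a diagnostic sketch, not the actual fix.
bad="lite/sweb.base.x86_64:latest"
fixed="${bad#lite/}"   # ${var#pattern} removes a leading match
echo "$fixed"
```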

Current OpenHands version

0.9.7

Installation and Configuration

git clone https://github.com/AlexCuadron/OpenHands.git
cd OpenHands
git remote add upstream [email protected]:All-Hands-AI/OpenHands.git
git remote -v
git fetch upstream
git checkout main
git merge upstream/main
git push origin main
conda create --name open-hands python=3.11 conda-forge::nodejs conda-forge::poetry
conda activate open-hands
brew install netcat
pip install git+https://github.com/OpenDevin/SWE-bench.git@7b0c4b1c249ed4b4600a5bba8afb916d543e034a
make build
poetry self update
make build
make setup-config
Setting up config.toml...
Enter your workspace directory (as absolute path) [default: ./workspace]:
Enter your LLM model name, used for running without UI. Set the model in the UI after you start the app. (see https://docs.litellm.ai/docs/providers for full list) [default: gpt-4o]: o1-mini
Enter your LLM api key:
Enter your LLM base URL [mostly used for local LLMs, leave blank if not needed - example: http://localhost:5001/v1/]:
Enter your LLM Embedding Model
Choices are:
  - openai
  - azureopenai
  - Embeddings available only with OllamaEmbedding:
    - llama2
    - mxbai-embed-large
    - nomic-embed-text
    - all-minilm
    - stable-code
    - bge-m3
    - bge-large
    - paraphrase-multilingual
    - snowflake-arctic-embed
  - Leave blank to default to 'BAAI/bge-small-en-v1.5' via huggingface
>
Config.toml setup completed.
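For reference, the answers above should yield a config.toml roughly like the one written below. This is a hedged reconstruction: the section and key names ([core], workspace_base, [llm], model, api_key) are assumptions inferred from the setup prompts, not copied from the generated file.

```shell
# Hypothetical reconstruction of the config.toml produced by `make setup-config`
# with the answers given above; section/key names are assumptions.
cat > config.toml <<'EOF'
[core]
workspace_base = "./workspace"

[llm]
model = "o1-mini"
api_key = "YOUR_API_KEY"
EOF
```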

export DEBUG=1

Model and Agent

  • Model: o1-mini

Operating System

Kubuntu 22.04.4 LTS x86_64

Reproduction Steps

Run: ./evaluation/swe_bench/scripts/docker/pull_all_eval_docker.sh instance lite

Logs, Errors, Screenshots, and Additional Context

No response

@AlexCuadron
Contributor Author

Fixed in #4244
