[GUIDE] Stable Diffusion CPU, CUDA, ROCm with Docker-compose #5049
Replies: 18 comments 18 replies
-
Sorry for the ignorance, but what is this? I mean, what is its purpose?
-
I'm experiencing the same issue. 5700xt/2700x.
-
Hi @catzy007, thanks for the image code. Now when I try to reach the API at "http://127.0.0.1:7860/docs" I can't access it. I would be glad if you could post a video of the whole process.
-
Minor typo above.
-
Thank you so much for this work. I was about to give up on trying a CPU version, and then I found this thread. (Thank you very much for this)
-
Thanks for the code, it works; I tested CPU only. I am new to Docker and Docker Compose. Is there a way to convert docker-compose.yml to a simple Dockerfile? I use CapRover to manage Docker and it can't handle docker-compose.yml, and when I tried converting docker-compose.yml to a Dockerfile using GPT, the result doesn't run. Help appreciated.
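There is no exact Dockerfile equivalent of a Compose service, because Compose also describes runtime settings (ports, volumes, restart policy) that a Dockerfile cannot express. A common workaround is to keep the Dockerfile as-is and reproduce the runtime settings with a plain `docker run`. A hedged sketch, assuming the `Dockerfile.cpu` from this thread; the image tag and container name below are illustrative, and the volume paths mirror the Compose file posted later in this thread:

```shell
# Build the image from the CPU Dockerfile (tag name is arbitrary):
docker build -f Dockerfile.cpu -t stablediff-cpu .

# Recreate the Compose service's runtime settings by hand:
docker run -d \
  --name stablediff-cpu-runner \
  --restart unless-stopped \
  -p 7860:7860 \
  -v "$(pwd)/stablediff.env:/stablediff.env" \
  -v "$(pwd)/stablediff-web:/stablediff-web" \
  -v "$(pwd)/stablediff-models:/stablediff-web/models/Stable-diffusion" \
  stablediff-cpu
```

For CapRover specifically, you would point the app at `Dockerfile.cpu` and configure the port and volume mappings in CapRover's app settings rather than converting the Compose file.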
-
Thanks @catzy007 !!! Finally SD + ROCm working! I tried to update the dependencies from
but I get the error below. Is it possible to upgrade?
-
Used this method to create a CPU runner and it works very well. I only made two minor edits to the process. 1: Changed the TORCH_COMMAND in the Dockerfile to:
2: Changed the second if statement in the docker-compose.yml start command to:
-
I'm trying to get this working on Windows 11 Pro with WSL2, and everything seemed to be going okay other than during the NVIDIA CUDA toolkit and driver installation, where the
However, I cannot bring the container up:
Maybe I have to reboot. I'm not sure how to restart the kernel under WSL2 without rebooting the entire PC, and nothing Docker-, NVIDIA-, or CUDA-related shows up when I type the command
I'll try to post an update if I solve this.
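For restarting WSL2 without rebooting the whole PC: WSL can be shut down from the Windows side and it restarts automatically on next use. This is standard WSL tooling, not specific to this guide:

```shell
# Run from Windows (PowerShell or CMD), not from inside the WSL distro:
# terminates all running WSL2 distros and the WSL2 VM/kernel.
wsl --shutdown

# The distro (and Docker Desktop's WSL backend, if you use it) starts
# again the next time you open it or launch Docker Desktop.
```

If Docker Desktop is managing the WSL backend, restarting Docker Desktop after `wsl --shutdown` is usually enough to pick up newly installed drivers.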
-
Further update: The command I have to use is
I copied the model file into
I will update as things happen (or don't). Update: Same result as before; it doesn't see the Stable Diffusion model in either place. Do I need to put it under
-
Excellent article. Thank you!!! This is working really well. Only one question: why does it say that the extensions are disabled? This happens if we go to the Extensions tab and try to install from URL.
-
hello! @catzy007 PS:
-
docker-compose up stablediff-cpu ERROR: for 8951ca12cd43_stablediff-cpu-runner 'ContainerConfig' ERROR: for stablediff-cpu 'ContainerConfig' I changed the .ckpt to .safetensors; after doing that I run the docker compose up command and I get this error.
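The `'ContainerConfig'` KeyError is a known incompatibility between old docker-compose 1.x and containers created or recreated under newer Docker versions, rather than anything caused by the .ckpt/.safetensors rename itself. Two common workarounds, hedged as general Docker troubleshooting (service and container names taken from the error above):

```shell
# Option 1: remove the stale container so Compose creates a fresh one:
docker-compose down            # or: docker rm -f stablediff-cpu-runner
docker-compose up stablediff-cpu

# Option 2: use Compose V2 (the `docker compose` plugin), which does not
# have this bug:
docker compose up stablediff-cpu
```

Note that the guide's Compose start command also only checks for `*.ckpt` models, so a `.safetensors`-only models directory may still trigger the "missing model" message separately.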
-
I have a question.
-
I got a new error: it kills itself after generating an image, and the image is not even shown in the web UI.
-
This is pretty amazing! I used it to create a GitHub repo that only runs the CPU version, so I can just pull it in Portainer and it takes care of itself. Huge thanks!
-
Hi. First of all, a BIG THANK YOU for the GREAT GUIDE!!! Related to GPU use: From
but I don't know if something changed in the meantime in the base image
And it seems that numpy 1.26.2 requires Python 3.9: https://pypi.org/project/numpy/1.26.2/ Anyway, I am pretty sure this could be done in a better and more elegant way, but this was the fastest for me and it is working. Dockerfile.cuda:
Thx
-
Ok, I would like to share my improvements once again. This doesn't mean it fits everyone, but it is what I wanted for my use case. I wanted to run Automatic1111 Stable Diffusion in Docker, with the additional extensions sd-webui-controlnet and sd-webui-reactor, but I struggled to make it work with the GPU, because:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
So I decided to inspect Dockerfile.cuda from the start, stage by stage, manually installing things in the container and updating Dockerfile.cuda accordingly. This is what I ended up with. It builds without any errors, sd-webui-controlnet and sd-webui-reactor also install without errors (insightface pre-installed), and xformers is installed and in use (which wasn't the case with the original Dockerfile.cuda, at least on my end). What I noticed so far are just a few warnings on the first run, about a few deprecations from torch.
FROM nvidia/cuda:12.6.1-base-ubuntu20.04
ARG DEBIAN_FRONTEND=noninteractive
ENV DEBIAN_FRONTEND=noninteractive \
PYTHONUNBUFFERED=1 \
PYTHONIOENCODING=UTF-8
WORKDIR /sdtemp
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y \
wget \
build-essential \
zlib1g-dev \
libncurses5-dev \
libgdbm-dev \
libnss3-dev \
libssl-dev \
libreadline-dev \
libffi-dev curl \
libsqlite3-dev \
libbz2-dev \
liblzma-dev \
libgl1-mesa-glx \
libglib2.0-0 \
git && \
wget https://www.python.org/ftp/python/3.10.6/Python-3.10.6.tgz && \
tar -xvzf Python-3.10.6.tgz && \
cd Python-3.10.6 && \
./configure && \
make -j $(nproc) && \
make install && \
cd .. && \
rm -rf Python-3.10.6*
RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /sdtemp
RUN python3 -m pip install --upgrade pip wheel
RUN python3 -m pip install insightface
ENV TORCH_COMMAND="pip install torch==2.4.0 torchvision==0.19.0 xformers --extra-index-url https://download.pytorch.org/whl/cu121"
RUN python3 -m $TORCH_COMMAND
RUN python3 launch.py --skip-torch-cuda-test --exit
WORKDIR /stablediff-web
As you can see above, I decided to:
INCOMPATIBLE PYTHON VERSION
This program is tested with 3.10.6 Python, but you have 3.12.3.
If you encounter an error with "RuntimeError: Couldn't install torch." message,
or any other error regarding unsuccessful package (library) installation,
please downgrade (or upgrade) to the latest version of 3.10 Python
and delete current Python and "venv" folder in WebUI's directory.
You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/
I decided to compile Python 3.10.6 manually
services:
stablediff-cuda:
build:
context: .
dockerfile: Dockerfile.cuda
container_name: stablediff-cuda-runner
restart: unless-stopped
runtime: nvidia
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
environment:
TZ: "Europe/Belgrade"
NVIDIA_VISIBLE_DEVICES: all
COMMANDLINE_ARGS: "--listen"
entrypoint: ["/bin/sh", "-c"]
command: >
"nvidia-smi; . /stablediff.env; echo launch.py $$COMMANDLINE_ARGS;
if [ ! -d /stablediff-web/.git ]; then
cp -a /sdtemp/. /stablediff-web/
fi;
if [ ! -f /stablediff-web/models/Stable-diffusion/*.ckpt ]; then
echo 'Please copy stable diffusion model to stablediff-models directory'
echo 'You may need sudo to perform this action'
exit 1
fi;
python3 launch.py"
ports:
- "7860:7860"
volumes:
- ./stablediff.env:/stablediff.env
- ./stablediff-web:/stablediff-web
- ./stablediff-models:/stablediff-web/models/Stable-diffusion
- ./controlnet-models:/stablediff-web/models/ControlNet
- ./lora-models:/stablediff-web/models/Lora
      - ./outputs:/stablediff-web/outputs
Not changed too much. I just created mounts on the host system for some additional folders, because I like it more this way, and TBH I am not sure at all about the part with
I'm running the app with the above command line (poor GPU), and still experimenting with
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacity of 3.63 GiB of which 34.75 MiB is free. Process 76491 has 3.59 GiB memory in use. Of the allocated memory 2.77 GiB is allocated by PyTorch, and 9.28 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
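Since the OOM message itself suggests `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`, one low-effort experiment in this setup is passing that variable into the container. A sketch; the variable name and value come straight from the PyTorch error text, and whether it actually helps on a 4 GB card is not guaranteed:

```shell
# Allocator setting suggested by the OOM message, shown as a plain shell
# export for illustration:
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
echo "$PYTORCH_CUDA_ALLOC_CONF"

# In docker-compose.yml the equivalent is one extra line under
# `environment:`:
#   PYTORCH_CUDA_ALLOC_CONF: "expandable_segments:True"
# Combined with launch flags like --lowvram (already supported via
# COMMANDLINE_ARGS), this targets fragmentation-related OOMs.
```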
Once again, I don't want to say this is the best approach. I am a n00b at AI image generation, so I don't know. I just want to share something that is working perfectly for me at the moment; maybe it will be useful to someone. @catzy007: thank you once again for the great Docker concept/configuration. This is the best I found on the internet for Automatic1111 Stable Diffusion.
-
[UPDATE 28/11/22] I have added support for CPU, CUDA and ROCm. CPU and CUDA are tested and fully working, while ROCm should "work".
Preparing your system
Install docker and docker-compose, and make sure docker-compose version 1.72.0 or later is installed.
Then install the NVIDIA Container Toolkit, or follow the ROCm Docker Quickstart.
For Windows, follow the CUDA on WSL User Guide, then "Enabling the Docker Repository" and "Installing the NVIDIA Container Toolkit".
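Before moving on, it is worth sanity-checking the setup. These are generic Docker commands, not from the guide itself; the CUDA image tag below is illustrative:

```shell
# Verify docker and docker-compose are installed and on the PATH:
docker --version
docker-compose --version

# For CUDA: verify the NVIDIA Container Toolkit works end to end by running
# nvidia-smi inside a CUDA base container:
docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu22.04 nvidia-smi
```

If the last command prints your GPU, the container runtime is ready for the CUDA variant below.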
Initial set-up
Create a new directory and save the following files:
docker-compose.yml
Dockerfile.cpu
Dockerfile.cuda
Dockerfile.rocm
stablediff.env
.dockerignore
Building docker image
This will download and install the required packages.
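The build step boils down to a Compose build of the service matching your hardware. A sketch using the service names seen elsewhere in this thread (`stablediff-cpu` and `stablediff-cuda` appear in later comments; the ROCm service name is an assumption):

```shell
# Build only the service you need (CPU, CUDA, or ROCm):
docker-compose build stablediff-cpu
# or
docker-compose build stablediff-cuda
# or
docker-compose build stablediff-rocm
```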
Setting up launch parameters
Edit `stablediff.env` to match your use case. You can also add launch parameters such as `--lowvram`.
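The original stablediff.env isn't reproduced here, but given how the Compose start command sources it (`. /stablediff.env` followed by `launch.py $COMMANDLINE_ARGS`), it is a shell-style environment file along these lines. The contents below are a hedged example, not the original file; `--lowvram` is mentioned in this guide and `--listen` in the Compose file shared in this thread:

```shell
# stablediff.env -- sourced by the container's start command.
# COMMANDLINE_ARGS holds the launch parameters passed to launch.py.
export COMMANDLINE_ARGS="--listen"
# Low-memory example (commented out):
# export COMMANDLINE_ARGS="--listen --lowvram"
```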
Initial run
This will create two directories called `stablediff-web` and `stablediff-models`. After the initial run, you will get a message about a missing Stable Diffusion ckpt model. Grab one and copy it to `stablediff-models`. On Linux, you might need `sudo` to do this.
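Concretely, the initial run and model copy look something like this. The model filename is illustrative, and the service name assumes the CPU variant:

```shell
# First run: creates ./stablediff-web and ./stablediff-models, then prints
# the missing-model message.
docker-compose up stablediff-cpu

# Copy a downloaded checkpoint into place (sudo may be needed on Linux
# because the directory is created by the container):
sudo cp ~/Downloads/v1-5-pruned-emaonly.ckpt stablediff-models/
```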
Subsequent run
Use the command below every time you want to run Stable Diffusion.
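The subsequent-run command is the same Compose invocation as the initial run, shown here for the CPU service (substitute the CUDA or ROCm service name for your hardware):

```shell
docker-compose up stablediff-cpu
# Or run detached so it keeps running after you close the terminal:
docker-compose up -d stablediff-cpu
```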
Stopping Stable Diffusion
To stop Stable Diffusion, press `Ctrl + C` and use the command below.
Testing
I ran this using Ubuntu 22.04 with dual Xeon X5670, 12 GB RAM, and a Polaris 11 4 GB GPU. Generating a 512x512 image using the CPU, I get around `17.44s/it`, using around 8~9 GB RAM. I also ran this using Ubuntu 20.04 with a Maxwell 2 GB GPU. Generating a 512x512 image, I get around `5.12s/it`.
If you managed to use this method, please comment below. This method should also work on Windows; I have not tried it personally, but I would like to hear whether it does.