AMD 7900xt GPU (gfx1100) docker image build workaround to run SD #8172
Nikoos started this conversation in Show and tell
---
I'll try this approach when I get home tonight. My 7900xt arrived at 19:30 and I fiddled with it until 3 a.m. without ever getting it to run: either the GPU wasn't found, or, once it was found, some other error showed up. I'm starting to regret not buying a 4080 instead.
---
Hello Team,
After purchasing my first AMD GPU (7900xt), I hit a hard stop when I tried to run SD on my new config. After some troubleshooting and reading a lot of GitHub issues, I was able to build an unstable (really, very unstable!) docker image with 7900xt (gfx1100) support for ROCm and PyTorch. If you want to give it a try, here is the workflow I used to build this docker image.
Note: you'll need at least 40GB of free disk space.
Note 2: I am no Docker or PyTorch specialist. This workflow can surely be simplified; as this is a community discussion, I hope someone more knowledgeable than me can simplify it.
To repeat: at the moment it's very buggy and unstable, so be prepared to run into issues.
If you want to try it yourself, here is how I built my image:
1. Clone the PyTorch repository: `git clone --recursive https://github.com/pytorch/pytorch`
2. In the `.ci/docker/` folder there is a file named `build.sh`. At line 203 you'll find the end of a `case` entry; just after its `;;`, add a new block for the gfx1100 build, so that the complete `case` section starting at line 194 also covers your GPU (a sketch of what such a block could look like follows this step). Save your file.
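As a rough sketch only, modeled on the existing ROCm entries in `build.sh` (the image name, versions and the `PYTORCH_ROCM_ARCH=gfx1100` value are illustrative, adapt them to your setup), such an entry could look like:

```bash
  # illustrative gfx1100 entry, patterned after the existing ROCm entries
  pytorch-linux-focal-rocm-py3-gfx1100)
    ANACONDA_PYTHON_VERSION=3.10
    GCC_VERSION=9
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ROCM_VERSION=5.4.2
    PYTORCH_ROCM_ARCH=gfx1100
    NINJA_VERSION=1.9.0
    CONDA_CMAKE=yes
    ;;
```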
3. Now in the `ubuntu-rocm` subfolder of `.ci/docker/` you'll have the `Dockerfile` that will be used to build your image. You'll have to add three lines at the bottom of this file (a sketch follows this step), then save your file. I think that part can be optimized to reduce the docker image size.
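A sketch of the kind of lines that can go at the bottom of the Dockerfile; all three are illustrative assumptions for a gfx1100 card on a ROCm stack that predates official RDNA3 support, not a known-good recipe:

```dockerfile
# illustrative only: expose the gfx1100 target to the build toolchain
ENV PYTORCH_ROCM_ARCH=gfx1100
# illustrative only: some ROCm releases need this override for RDNA3 cards
ENV HSA_OVERRIDE_GFX_VERSION=11.0.0
# illustrative only: an extra package the SD webui likes to have at runtime
RUN apt-get update && apt-get install -y --no-install-recommends libgoogle-perftools4 && rm -rf /var/lib/apt/lists/*
```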
4. Now go back to your previous folder and launch the build command (I assume your user is in the `docker` group); a sketch of the invocation follows this step. The build process will then run (it takes up to 40 minutes on my computer).
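Assuming the image name added to `build.sh` in step 2 (the name below is the illustrative one from the sketch above), the build is launched from the `.ci/docker/` folder roughly like this:

```bash
cd pytorch/.ci/docker
# the argument must match the case entry added to build.sh in step 2
./build.sh pytorch-linux-focal-rocm-py3-gfx1100
```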
5. If the build process finished correctly, `docker images` will list a huge image.
6. Time to tag the image named `tmp.7iqw2ylj9q` with a more human readable name.
7. Run a container from that image, mounting `/home/YOURUSER/stable-diffusion-webui/` (replace `YOURUSER` by your username) and passing your GPU device (replace `/dev/dri/renderXXX` by the one mounted in your docker container). Sketches of the tag and run commands are given just after step 8.
8. Inside the container, run `rocminfo`: you must see your device detected; if not, no need to continue, you won't be able to run SD. For example, the output should contain an agent whose name is `gfx1100`.
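A sketch of the tag and run commands from steps 6 and 7; the target tag, the mount point inside the container, the published port and the render node are illustrative, adapt them to your system:

```bash
# step 6: give the freshly built image a readable name
docker tag tmp.7iqw2ylj9q rocm-gfx1100-pytorch:latest

# step 7: start an interactive container with GPU access and the SD folder mounted
docker run -it \
  --device /dev/kfd --device /dev/dri/renderXXX \
  --group-add video --ipc=host \
  -v /home/YOURUSER/stable-diffusion-webui/:/home/YOURUSER/stable-diffusion-webui/ \
  -p 7860:7860 \
  rocm-gfx1100-pytorch:latest bash

# step 8: inside the container, check that the GPU is visible
rocminfo | grep -i gfx1100
```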
9. If the `rocminfo` output is OK, let's test with pytorch to confirm that it's also working: run the `python3` interpreter and import torch; it will load the torch python module.
10. Then, to check if torch is able to work with your GPU, query the device from the interpreter (note: the `>>>` prompt should not be typed); a sketch is shown below. If your GPU appears, you're good to go to the next step.
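A minimal interpreter session for steps 9 and 10, assuming the usual PyTorch calls (ROCm builds expose the GPU through the `torch.cuda` interface); the printed device name may differ slightly on your setup:

```python
>>> import torch                     # step 9: load the torch module
>>> torch.cuda.is_available()        # should print True on a working ROCm build
True
>>> torch.cuda.get_device_name(0)    # step 10: your 7900xt should appear here
'AMD Radeon RX 7900 XT'
```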
11. Let's git clone the stable diffusion repo in your `$HOMEDIR` (in my case `/var/lib/jenkins`); a sketch of the clone command is given after step 15.
12. Run `pip list`: you'll have a very long list of python packages. That list should be identical in SD's venv.
13. To confirm that, source SD's venv with `source venv/bin/activate`, then run the same `pip list` command as above; you must see the same content. If it's identical, you can go to the next and final step: updating SD to listen on 0.0.0.0.
14. Edit `webui-user.sh` and replace the following line (line 13): `#export COMMANDLINE_ARGS=""` by an uncommented version with the options you need (a sketch follows step 15; note: if you want to install extensions, I found that the option `--enable-insecure-extension-access` should be added to this command line too).
15. All modifications are done, now it's time to start SD with `./webui.sh`.
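Two of the commands referenced above, sketched out; the repository URL in step 11 and the exact `COMMANDLINE_ARGS` value in step 14 are illustrative, the requirement is only that SD ends up listening on 0.0.0.0:

```bash
# step 11: clone the webui repo into the container's home directory (assumed URL)
cd /var/lib/jenkins
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

# step 14: plausible replacement for line 13 of webui-user.sh; the --listen flag
# makes the webui bind to 0.0.0.0, and --enable-insecure-extension-access lets
# you install extensions from the UI
export COMMANDLINE_ARGS="--listen --enable-insecure-extension-access"
```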
Regards,
Nikos