Running stable diffusion on an integrated AMD gpu (on Arch)
Lots of qualifiers, let’s see how this works.
First of all, here’s a much more detailed article that this post just builds on: https://www.gabriel.urdhr.fr/2022/08/28/trying-to-run-stable-diffusion-on-amd-ryzen-5-5600g/
The following AUR packages seem to be needed:
- rocm-device-libs
- rocm-llvm (4GB big?! only needed to build though)
- rocm-cmake
- rocminfo
- rocm-smi-lib
- hsa-rocr
- hsakmt-roct
- needs texlive-* stuff for the build, which can be removed afterwards (e.g. by building with makepkg --syncdeps --rmdeps)
Sub-dependencies are noted because I only use makepkg; this would be easier with an AUR wrapper/helper and/or more disk space.
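With plain makepkg, building each of the packages above looks roughly like this (a sketch only; it assumes the usual https://aur.archlinux.org/&lt;pkgname&gt;.git clone URLs, and the packages have to be built in dependency order):

```shell
# Sketch: build and install one AUR package with makepkg,
# pulling build deps from the repos and removing them afterwards.
# Repeat (in dependency order) for each package in the list above.
pkg=rocm-device-libs
git clone "https://aur.archlinux.org/${pkg}.git"
cd "${pkg}"
makepkg --syncdeps --rmdeps --install
cd ..
```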
Installing torch goes as follows, assuming a venv at ./venv:
$ ./venv/bin/pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2
Note that 5.2 seems to be the latest ROCm version that PyTorch wheels are available for; I am not sure whether that is compatible with ROCm 5.4, which is what the AUR currently has.
And then you can check if torch thinks it has gpu acceleration support:
# should print 'True', otherwise something is up
$ HSA_OVERRIDE_GFX_VERSION=9.0.0 ./venv/bin/python -c 'import torch; print(torch.cuda.is_available())'
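One gotcha here: HSA_OVERRIDE_GFX_VERSION has to be in the environment before torch (and with it ROCm) initializes. Instead of prefixing every command, you can also set it at the top of a script, before the import. A minimal sketch (the 9.0.0 value is what worked for the Vega iGPU here; your GPU may need a different one):

```python
import os

# Must happen before `import torch`, since ROCm reads the
# override when it initializes.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "9.0.0")

def gpu_available() -> bool:
    """True if torch reports a usable GPU device; False if torch
    is missing or only the CPU is visible."""
    try:
        import torch
    except ImportError:
        return False
    return torch.cuda.is_available()

if __name__ == "__main__":
    print(gpu_available())
```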
And then you can try to do the usual dance, e.g. using stable-diffusion-webui:
$ HSA_OVERRIDE_GFX_VERSION=9.0.0 TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2' ./venv/bin/python launch.py --precision full --no-half
Python 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0]
Commit hash: 685f9631b56ff8bd43bce24ff5ce0f9a0e9af490
Installing gfpgan
Installing clip
Installing open_clip
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments: --precision full --no-half
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [7460a6fa] from /home/luna/t/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt
Applying cross attention optimization (Doggettx).
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings:
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
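To avoid retyping the env vars and flags every time, they can be bundled into a small wrapper script (a sketch; the filename is made up, and it assumes you run it from the stable-diffusion-webui checkout):

```shell
#!/bin/sh
# Hypothetical run-webui.sh: collects the env vars and flags from above.
export HSA_OVERRIDE_GFX_VERSION=9.0.0
export TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2'
exec ./venv/bin/python launch.py --precision full --no-half
```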
Don’t forget the ./venv/bin/ prefix in front of python; that tripped me up a couple of times.
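Alternatively, activating the venv once per shell session makes plain python (and pip) resolve to the venv’s copies:

```shell
# Create the venv if it does not exist yet, then activate it.
python3 -m venv ./venv
. ./venv/bin/activate
command -v python   # prints the venv's python path
```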
See https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs for more tips and tricks about the webui thing.
Generating a batch of 10 images, this is what I got for the prompt “gothy witch in the forest with a large pointed hat with stars on it, 4k, detailed, gloomy, cat sitting on a tree branch”:
Which goes to show that writing prompts is not trivial either, I suppose.
And that might be how you can get this to run on a laptop. Worked for me. The only thing left is to contemplate the ethics/motivations for replacing artists with “ai” magic.
I prefer humans, so here are some whose art I like: