This repository has been archived by the owner on Jul 12, 2024. It is now read-only.
Stable Diffusion WebUI Cloud Inference Tutorial
AnyISalIn edited this page Sep 4, 2023 · 17 revisions
- 1. Install sd-webui-cloud-inference
- 2. Get your Key
- 3. Enable Cloud Inference feature
- 4. Test Txt2Img
- 5. Advanced - Lora
- 7. Advanced - Img2img Inpainting
- 8. Advanced - VAE
- 9. Advanced - ControlNet
- 10. Advanced - Upscale and Hires.Fix
2. Get your omniinfer.io Key
Open omniinfer.io in your browser.
You can sign in with either "Google Login" or "GitHub Login".
Return to the Cloud Inference tab of stable-diffusion-webui.
Return to the Txt2Img tab of stable-diffusion-webui.
From now on, you can give it a try and enjoy your creative journey.
You are also welcome to discuss your experience, share suggestions, and provide feedback on our Discord channel.
Alternatively, you can use the VAE feature together with the X/Y/Z plot script.
The AUTOMATIC1111 webui loads a model on startup. On low-memory machines such as the MacBook Air, this makes performance suboptimal. To address this, we have built a stripped-down, minimal-size model. Use the following commands to enable it; it reduces memory usage from 4.8 GB to 739 MB.
- Download the tiny model and its config into the model directory:

```shell
wget -O ./models/Stable-diffusion/tiny.yaml https://github.com/omniinfer/sd-webui-cloud-inference/releases/download/tiny-model/tiny.yaml
wget -O ./models/Stable-diffusion/tiny.safetensors https://github.com/omniinfer/sd-webui-cloud-inference/releases/download/tiny-model/tiny.safetensors
```
- Start the webui with the tiny model by passing:

```shell
--ckpt=/stable-diffusion-webui/models/Stable-diffusion/tiny.safetensors
```
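Putting the steps above together, a complete setup might look like the following sketch. It assumes a standard AUTOMATIC1111 checkout launched via its stock `webui.sh` script; adjust the checkout path to match your installation.

```shell
# Assumption: run from the root of a stable-diffusion-webui checkout.
cd stable-diffusion-webui

# Fetch the stripped-down model and its config side by side, so the webui
# pairs tiny.yaml with tiny.safetensors automatically.
wget -O ./models/Stable-diffusion/tiny.yaml https://github.com/omniinfer/sd-webui-cloud-inference/releases/download/tiny-model/tiny.yaml
wget -O ./models/Stable-diffusion/tiny.safetensors https://github.com/omniinfer/sd-webui-cloud-inference/releases/download/tiny-model/tiny.safetensors

# Point the webui at the tiny checkpoint on startup.
./webui.sh --ckpt=./models/Stable-diffusion/tiny.safetensors
```

Since all image generation happens in the cloud, the local checkpoint is only a placeholder, which is why such a small model is sufficient.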